00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 226 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3727 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.136 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.136 The recommended git tool is: git 00:00:00.136 using credential 00000000-0000-0000-0000-000000000002 00:00:00.138 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.196 Fetching changes from the remote Git repository 00:00:00.198 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.241 Using shallow fetch with depth 1 00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.241 > git --version # timeout=10 00:00:00.285 > git --version # 'git version 2.39.2' 00:00:00.285 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.306 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.306 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.500 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.511 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.522 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.522 > git config core.sparsecheckout # timeout=10 00:00:06.531 > git read-tree -mu HEAD # timeout=10 00:00:06.546 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.575 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.575 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.707 [Pipeline] Start of Pipeline 00:00:06.718 [Pipeline] library 00:00:06.719 Loading library shm_lib@master 00:00:06.719 Library shm_lib@master is cached. Copying from home. 00:00:06.730 [Pipeline] node 00:00:06.742 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.744 [Pipeline] { 00:00:06.750 [Pipeline] catchError 00:00:06.751 [Pipeline] { 00:00:06.760 [Pipeline] wrap 00:00:06.765 [Pipeline] { 00:00:06.770 [Pipeline] stage 00:00:06.771 [Pipeline] { (Prologue) 00:00:06.952 [Pipeline] sh 00:00:07.240 + logger -p user.info -t JENKINS-CI 00:00:07.259 [Pipeline] echo 00:00:07.260 Node: WFP21 00:00:07.268 [Pipeline] sh 00:00:07.576 [Pipeline] setCustomBuildProperty 00:00:07.586 [Pipeline] echo 00:00:07.587 Cleanup processes 00:00:07.592 [Pipeline] sh 00:00:07.878 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.878 2523057 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.890 [Pipeline] sh 00:00:08.178 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.178 ++ grep -v 'sudo pgrep' 00:00:08.178 ++ awk '{print $1}' 00:00:08.178 + sudo kill -9 00:00:08.178 + true 00:00:08.193 [Pipeline] cleanWs 00:00:08.203 [WS-CLEANUP] Deleting project workspace... 00:00:08.204 [WS-CLEANUP] Deferred wipeout is used... 
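(For reference, the stale-process cleanup traced above reduces to the following idiom; a minimal sketch reusing this job's workspace path, with the empty-match case handled the same way as the lone "+ true" in the trace:)

    # Find any SPDK processes left over from a previous run of this job.
    # pgrep -af matches the full command line, and its output includes the
    # 'sudo pgrep' invocation itself, so that line is filtered out before
    # the PIDs are extracted with awk.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk \
             | grep -v 'sudo pgrep' \
             | awk '{print $1}')
    # kill -9 with an empty PID list exits nonzero; '|| true' keeps the
    # cleanup step from aborting the pipeline when nothing matched.
    sudo kill -9 $pids || true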
00:00:08.210 [WS-CLEANUP] done 00:00:08.215 [Pipeline] setCustomBuildProperty 00:00:08.230 [Pipeline] sh 00:00:08.518 + sudo git config --global --replace-all safe.directory '*' 00:00:08.623 [Pipeline] httpRequest 00:00:09.590 [Pipeline] echo 00:00:09.592 Sorcerer 10.211.164.20 is alive 00:00:09.601 [Pipeline] retry 00:00:09.603 [Pipeline] { 00:00:09.617 [Pipeline] httpRequest 00:00:09.621 HttpMethod: GET 00:00:09.622 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.622 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.635 Response Code: HTTP/1.1 200 OK 00:00:09.636 Success: Status code 200 is in the accepted range: 200,404 00:00:09.636 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.131 [Pipeline] } 00:00:16.148 [Pipeline] // retry 00:00:16.155 [Pipeline] sh 00:00:16.440 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.455 [Pipeline] httpRequest 00:00:16.835 [Pipeline] echo 00:00:16.837 Sorcerer 10.211.164.20 is alive 00:00:16.845 [Pipeline] retry 00:00:16.847 [Pipeline] { 00:00:16.859 [Pipeline] httpRequest 00:00:16.863 HttpMethod: GET 00:00:16.863 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:16.863 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:16.871 Response Code: HTTP/1.1 200 OK 00:00:16.871 Success: Status code 200 is in the accepted range: 200,404 00:00:16.871 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:24.554 [Pipeline] } 00:01:24.572 [Pipeline] // retry 00:01:24.580 [Pipeline] sh 00:01:24.865 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:27.410 [Pipeline] sh 00:01:27.692 + git -C spdk log --oneline -n5 00:01:27.692 b18e1bd62 version: v24.09.1-pre 00:01:27.692 19524ad45 version: v24.09 00:01:27.692 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:27.692 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:27.692 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:27.710 [Pipeline] withCredentials 00:01:27.720 > git --version # timeout=10 00:01:27.733 > git --version # 'git version 2.39.2' 00:01:27.750 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:27.752 [Pipeline] { 00:01:27.761 [Pipeline] retry 00:01:27.763 [Pipeline] { 00:01:27.778 [Pipeline] sh 00:01:28.060 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:28.333 [Pipeline] } 00:01:28.351 [Pipeline] // retry 00:01:28.357 [Pipeline] } 00:01:28.373 [Pipeline] // withCredentials 00:01:28.382 [Pipeline] httpRequest 00:01:28.755 [Pipeline] echo 00:01:28.757 Sorcerer 10.211.164.20 is alive 00:01:28.766 [Pipeline] retry 00:01:28.768 [Pipeline] { 00:01:28.782 [Pipeline] httpRequest 00:01:28.786 HttpMethod: GET 00:01:28.786 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:28.787 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:28.798 Response Code: HTTP/1.1 200 OK 00:01:28.798 Success: Status code 200 is in the accepted range: 200,404 00:01:28.799 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:42.319 [Pipeline] } 
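(The fetch-and-unpack pattern repeated above — a Jenkins httpRequest against the Sorcerer package cache at 10.211.164.20, then a tar extraction — is roughly equivalent to the shell sketch below; curl is an assumption here, since the pipeline uses the httpRequest step rather than a command-line download:)

    # Download a source snapshot pinned by commit hash from the internal
    # package cache and unpack it into the workspace.
    rev=d15625009dced269fcec27fc81dd74fd58d54cdb
    curl -fSL "http://10.211.164.20/packages/dpdk_${rev}.tar.gz" \
         -o "dpdk_${rev}.tar.gz"   # curl stands in for httpRequest here
    # --no-same-owner stops tar from restoring the archive's recorded file
    # ownership, so the extracted tree stays owned by the CI user.
    tar --no-same-owner -xf "dpdk_${rev}.tar.gz"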
00:01:42.334 [Pipeline] // retry 00:01:42.341 [Pipeline] sh 00:01:42.626 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:44.020 [Pipeline] sh 00:01:44.307 + git -C dpdk log --oneline -n5 00:01:44.307 eeb0605f11 version: 23.11.0 00:01:44.307 238778122a doc: update release notes for 23.11 00:01:44.307 46aa6b3cfc doc: fix description of RSS features 00:01:44.307 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:44.307 7e421ae345 devtools: support skipping forbid rule check 00:01:44.317 [Pipeline] } 00:01:44.332 [Pipeline] // stage 00:01:44.341 [Pipeline] stage 00:01:44.343 [Pipeline] { (Prepare) 00:01:44.362 [Pipeline] writeFile 00:01:44.378 [Pipeline] sh 00:01:44.666 + logger -p user.info -t JENKINS-CI 00:01:44.679 [Pipeline] sh 00:01:44.965 + logger -p user.info -t JENKINS-CI 00:01:44.978 [Pipeline] sh 00:01:45.264 + cat autorun-spdk.conf 00:01:45.264 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.264 SPDK_TEST_NVMF=1 00:01:45.264 SPDK_TEST_NVME_CLI=1 00:01:45.264 SPDK_TEST_NVMF_NICS=mlx5 00:01:45.264 SPDK_RUN_UBSAN=1 00:01:45.264 NET_TYPE=phy 00:01:45.264 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.264 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:45.272 RUN_NIGHTLY=1 00:01:45.276 [Pipeline] readFile 00:01:45.299 [Pipeline] withEnv 00:01:45.301 [Pipeline] { 00:01:45.314 [Pipeline] sh 00:01:45.607 + set -ex 00:01:45.607 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:45.607 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:45.607 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.607 ++ SPDK_TEST_NVMF=1 00:01:45.607 ++ SPDK_TEST_NVME_CLI=1 00:01:45.607 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:45.607 ++ SPDK_RUN_UBSAN=1 00:01:45.607 ++ NET_TYPE=phy 00:01:45.607 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.607 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:45.607 ++ RUN_NIGHTLY=1 00:01:45.607 + case $SPDK_TEST_NVMF_NICS in 00:01:45.607 + DRIVERS=mlx5_ib 00:01:45.607 + [[ -n mlx5_ib ]] 00:01:45.607 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:45.607 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:52.182 rmmod: ERROR: Module irdma is not currently loaded 00:01:52.182 rmmod: ERROR: Module i40iw is not currently loaded 00:01:52.182 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:52.182 + true 00:01:52.182 + for D in $DRIVERS 00:01:52.182 + sudo modprobe mlx5_ib 00:01:52.182 + exit 0 00:01:52.192 [Pipeline] } 00:01:52.206 [Pipeline] // withEnv 00:01:52.212 [Pipeline] } 00:01:52.226 [Pipeline] // stage 00:01:52.235 [Pipeline] catchError 00:01:52.237 [Pipeline] { 00:01:52.251 [Pipeline] timeout 00:01:52.251 Timeout set to expire in 1 hr 0 min 00:01:52.253 [Pipeline] { 00:01:52.267 [Pipeline] stage 00:01:52.269 [Pipeline] { (Tests) 00:01:52.283 [Pipeline] sh 00:01:52.570 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:52.570 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:52.570 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:52.570 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:52.570 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:52.570 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:52.570 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:52.570 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:52.570 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:52.570 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:52.570 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:52.570 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:52.570 + source /etc/os-release 00:01:52.570 ++ NAME='Fedora Linux' 00:01:52.570 ++ VERSION='39 (Cloud Edition)' 00:01:52.570 ++ ID=fedora 00:01:52.570 ++ VERSION_ID=39 00:01:52.570 ++ VERSION_CODENAME= 00:01:52.570 ++ PLATFORM_ID=platform:f39 00:01:52.570 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:52.570 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.570 ++ LOGO=fedora-logo-icon 00:01:52.570 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:52.570 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.570 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:52.570 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.570 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.570 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.570 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:52.570 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.570 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:52.570 ++ SUPPORT_END=2024-11-12 00:01:52.570 ++ VARIANT='Cloud Edition' 00:01:52.570 ++ VARIANT_ID=cloud 00:01:52.570 + uname -a 00:01:52.570 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:52.570 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:55.861 Hugepages 00:01:55.861 node hugesize free / total 00:01:55.861 node0 1048576kB 0 / 0 00:01:55.861 node0 2048kB 0 / 0 00:01:55.861 node1 1048576kB 0 / 0 00:01:55.861 node1 2048kB 0 / 0 00:01:55.861 00:01:55.861 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:55.861 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:55.861 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:55.861 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:55.861 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:55.861 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:55.861 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:55.861 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:55.861 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:55.861 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:55.861 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:55.861 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:55.861 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:55.861 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:55.861 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:55.861 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:55.861 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:55.861 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:55.861 + rm -f /tmp/spdk-ld-path 00:01:55.861 + source autorun-spdk.conf 00:01:55.861 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.861 ++ SPDK_TEST_NVMF=1 00:01:55.861 ++ SPDK_TEST_NVME_CLI=1 00:01:55.861 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:55.861 ++ SPDK_RUN_UBSAN=1 00:01:55.861 ++ NET_TYPE=phy 00:01:55.861 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:55.861 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:55.861 ++ RUN_NIGHTLY=1 00:01:55.861 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:55.861 + [[ -n '' ]] 00:01:55.861 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:55.861 + for M in /var/spdk/build-*-manifest.txt 
00:01:55.861 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:55.861 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:55.861 + for M in /var/spdk/build-*-manifest.txt 00:01:55.861 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:55.861 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:55.861 + for M in /var/spdk/build-*-manifest.txt 00:01:55.861 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:55.861 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:55.861 ++ uname 00:01:55.861 + [[ Linux == \L\i\n\u\x ]] 00:01:55.861 + sudo dmesg -T 00:01:55.861 + sudo dmesg --clear 00:01:55.861 + dmesg_pid=2524560 00:01:55.861 + [[ Fedora Linux == FreeBSD ]] 00:01:55.861 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.861 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.861 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:55.861 + [[ -x /usr/src/fio-static/fio ]] 00:01:55.861 + export FIO_BIN=/usr/src/fio-static/fio 00:01:55.861 + FIO_BIN=/usr/src/fio-static/fio 00:01:55.861 + sudo dmesg -Tw 00:01:55.861 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:55.861 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:55.861 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:55.861 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.861 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.861 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:55.861 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.861 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.861 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:55.861 Test configuration: 00:01:55.861 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.861 SPDK_TEST_NVMF=1 00:01:55.861 SPDK_TEST_NVME_CLI=1 00:01:55.861 SPDK_TEST_NVMF_NICS=mlx5 00:01:55.861 SPDK_RUN_UBSAN=1 00:01:55.861 NET_TYPE=phy 00:01:55.861 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:55.861 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:55.861 RUN_NIGHTLY=1 15:49:24 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:55.861 15:49:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:55.861 15:49:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:55.861 15:49:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:55.861 15:49:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:55.861 15:49:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:55.861 15:49:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.861 15:49:24 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.861 15:49:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.861 15:49:24 -- paths/export.sh@5 -- $ export PATH 00:01:55.862 15:49:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.862 15:49:24 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:55.862 15:49:24 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:55.862 15:49:24 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734274164.XXXXXX 00:01:55.862 15:49:24 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734274164.3G8owB 00:01:55.862 15:49:24 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:55.862 15:49:24 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:01:55.862 15:49:24 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:55.862 15:49:24 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:55.862 15:49:24 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:55.862 15:49:24 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:55.862 15:49:24 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:55.862 15:49:24 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:55.862 15:49:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.121 15:49:24 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:56.121 15:49:24 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:56.121 15:49:24 -- pm/common@17 -- $ local monitor 00:01:56.121 15:49:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.121 15:49:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.121 15:49:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.121 15:49:24 -- pm/common@21 -- $ date +%s 
00:01:56.121 15:49:24 -- pm/common@21 -- $ date +%s 00:01:56.121 15:49:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.121 15:49:24 -- pm/common@25 -- $ sleep 1 00:01:56.121 15:49:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734274164 00:01:56.121 15:49:24 -- pm/common@21 -- $ date +%s 00:01:56.121 15:49:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734274164 00:01:56.121 15:49:24 -- pm/common@21 -- $ date +%s 00:01:56.121 15:49:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734274164 00:01:56.121 15:49:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734274164 00:01:56.121 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734274164_collect-vmstat.pm.log 00:01:56.121 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734274164_collect-cpu-load.pm.log 00:01:56.121 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734274164_collect-cpu-temp.pm.log 00:01:56.121 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734274164_collect-bmc-pm.bmc.pm.log 00:01:57.068 15:49:25 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:01:57.068 15:49:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:57.068 15:49:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:57.068 15:49:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:57.068 15:49:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:57.068 Sun Dec 15 02:49:25 PM UTC 2024 00:01:57.068 15:49:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:57.068 v24.09-1-gb18e1bd62 00:01:57.068 15:49:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:57.068 15:49:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:57.068 15:49:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:57.068 15:49:25 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:57.068 15:49:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:57.068 15:49:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.068 ************************************ 00:01:57.068 START TEST ubsan 00:01:57.068 ************************************ 00:01:57.068 15:49:25 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:57.068 using ubsan 00:01:57.068 00:01:57.068 real 0m0.001s 00:01:57.068 user 0m0.000s 00:01:57.068 sys 0m0.000s 00:01:57.068 15:49:25 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:57.068 15:49:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:57.068 ************************************ 00:01:57.068 END TEST ubsan 00:01:57.068 ************************************ 00:01:57.068 15:49:25 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:57.068 15:49:25 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:57.068 15:49:25 -- 
common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:57.068 15:49:25 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:57.068 15:49:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:57.068 15:49:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.068 ************************************ 00:01:57.068 START TEST build_native_dpdk 00:01:57.068 ************************************ 00:01:57.068 15:49:25 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:57.068 eeb0605f11 version: 23.11.0 00:01:57.068 238778122a doc: update release notes for 23.11 00:01:57.068 46aa6b3cfc doc: fix description of RSS features 00:01:57.068 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:57.068 7e421ae345 devtools: support skipping forbid rule check 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:57.068 15:49:25 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:57.328 15:49:25 build_native_dpdk -- 
scripts/common.sh@345 -- $ : 1 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:57.328 patching file config/rte_config.h 00:01:57.328 Hunk #1 succeeded at 60 (offset 1 line). 00:01:57.328 15:49:25 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:57.328 15:49:25 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:57.329 15:49:25 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:57.329 patching file lib/pcapng/rte_pcapng.c 00:01:57.329 15:49:25 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:57.329 15:49:25 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:57.329 15:49:25 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:57.329 15:49:25 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:57.329 15:49:25 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:57.329 15:49:25 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:57.329 15:49:25 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:02.606 The Meson build system 00:02:02.606 Version: 1.5.0 00:02:02.606 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:02:02.606 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:02:02.606 Build type: native build 00:02:02.606 Program cat found: YES (/usr/bin/cat) 00:02:02.606 Project name: DPDK 00:02:02.606 Project version: 23.11.0 00:02:02.607 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:02.607 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:02.607 Host machine cpu family: x86_64 00:02:02.607 Host machine cpu: x86_64 00:02:02.607 Message: ## Building in Developer Mode ## 00:02:02.607 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.607 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:02.607 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.607 Program python3 found: YES (/usr/bin/python3) 00:02:02.607 Program cat found: YES (/usr/bin/cat) 00:02:02.607 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:02.607 Compiler for C supports arguments -march=native: YES 00:02:02.607 Checking for size of "void *" : 8 00:02:02.607 Checking for size of "void *" : 8 (cached) 00:02:02.607 Library m found: YES 00:02:02.607 Library numa found: YES 00:02:02.607 Has header "numaif.h" : YES 00:02:02.607 Library fdt found: NO 00:02:02.607 Library execinfo found: NO 00:02:02.607 Has header "execinfo.h" : YES 00:02:02.607 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:02.607 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:02.607 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.607 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.607 Run-time dependency openssl found: YES 3.1.1 00:02:02.607 Run-time dependency libpcap found: YES 1.10.4 00:02:02.607 Has header "pcap.h" with dependency libpcap: YES 00:02:02.607 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.607 Compiler for C supports arguments -Wdeprecated: YES 00:02:02.607 Compiler for C supports arguments -Wformat: YES 00:02:02.607 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:02.607 Compiler for C supports arguments -Wformat-security: NO 00:02:02.607 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.607 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:02.607 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.607 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.607 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.607 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.607 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.607 Compiler for C supports arguments -Wundef: YES 00:02:02.607 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.607 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.607 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:02.607 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.607 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:02.607 Program objdump found: YES (/usr/bin/objdump) 00:02:02.607 Compiler for C supports arguments -mavx512f: YES 00:02:02.607 Checking if "AVX512 checking" compiles: YES 00:02:02.607 Fetching value of define "__SSE4_2__" : 1 00:02:02.607 Fetching value of define "__AES__" : 1 00:02:02.607 Fetching value of define "__AVX__" : 1 00:02:02.607 Fetching value of define "__AVX2__" : 1 00:02:02.607 Fetching value of define "__AVX512BW__" : 1 00:02:02.607 Fetching value of define "__AVX512CD__" : 1 00:02:02.607 Fetching value of define "__AVX512DQ__" : 1 00:02:02.607 Fetching value of define "__AVX512F__" : 1 00:02:02.607 Fetching value of define "__AVX512VL__" : 1 00:02:02.607 Fetching value of define "__PCLMUL__" : 1 00:02:02.607 Fetching value of define "__RDRND__" : 1 00:02:02.607 Fetching value of define "__RDSEED__" : 1 00:02:02.607 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.607 Fetching value of define "__znver1__" : (undefined) 00:02:02.607 Fetching value of define "__znver2__" : (undefined) 00:02:02.607 Fetching value of define "__znver3__" : (undefined) 00:02:02.607 Fetching value of define "__znver4__" : (undefined) 00:02:02.607 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.607 Message: lib/log: Defining dependency "log" 00:02:02.607 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.607 Message: lib/telemetry: Defining dependency 
"telemetry" 00:02:02.607 Checking for function "getentropy" : NO 00:02:02.607 Message: lib/eal: Defining dependency "eal" 00:02:02.607 Message: lib/ring: Defining dependency "ring" 00:02:02.607 Message: lib/rcu: Defining dependency "rcu" 00:02:02.607 Message: lib/mempool: Defining dependency "mempool" 00:02:02.607 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.607 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:02.607 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:02.607 Compiler for C supports arguments -mpclmul: YES 00:02:02.607 Compiler for C supports arguments -maes: YES 00:02:02.607 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.607 Compiler for C supports arguments -mavx512bw: YES 00:02:02.607 Compiler for C supports arguments -mavx512dq: YES 00:02:02.607 Compiler for C supports arguments -mavx512vl: YES 00:02:02.607 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.607 Compiler for C supports arguments -mavx2: YES 00:02:02.607 Compiler for C supports arguments -mavx: YES 00:02:02.607 Message: lib/net: Defining dependency "net" 00:02:02.607 Message: lib/meter: Defining dependency "meter" 00:02:02.607 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.607 Message: lib/pci: Defining dependency "pci" 00:02:02.607 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.607 Message: lib/metrics: Defining dependency "metrics" 00:02:02.607 Message: lib/hash: Defining dependency "hash" 00:02:02.607 Message: lib/timer: Defining dependency "timer" 00:02:02.607 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:02.607 Message: lib/acl: Defining dependency "acl" 00:02:02.607 Message: lib/bbdev: Defining dependency "bbdev" 00:02:02.607 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:02.607 Run-time dependency libelf found: YES 0.191 00:02:02.607 Message: lib/bpf: Defining dependency "bpf" 00:02:02.607 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:02.607 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.607 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.607 Message: lib/distributor: Defining dependency "distributor" 00:02:02.607 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.607 Message: lib/efd: Defining dependency "efd" 00:02:02.607 Message: lib/eventdev: Defining dependency "eventdev" 00:02:02.607 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:02.607 Message: lib/gpudev: Defining dependency "gpudev" 00:02:02.607 Message: lib/gro: Defining dependency "gro" 00:02:02.607 Message: lib/gso: Defining dependency "gso" 00:02:02.607 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:02.607 Message: lib/jobstats: Defining dependency "jobstats" 00:02:02.607 Message: lib/latencystats: Defining dependency "latencystats" 00:02:02.607 Message: lib/lpm: Defining dependency "lpm" 00:02:02.607 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:02:02.607 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:02.607 Message: lib/member: Defining dependency "member" 00:02:02.607 Message: lib/pcapng: Defining dependency "pcapng" 00:02:02.607 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.607 Message: lib/power: Defining dependency "power" 00:02:02.607 Message: lib/rawdev: Defining dependency "rawdev" 00:02:02.607 Message: lib/regexdev: Defining dependency "regexdev" 00:02:02.607 Message: lib/mldev: Defining dependency "mldev" 00:02:02.607 Message: lib/rib: Defining dependency "rib" 00:02:02.607 Message: lib/reorder: Defining dependency "reorder" 00:02:02.607 Message: lib/sched: Defining dependency "sched" 00:02:02.607 Message: lib/security: Defining dependency "security" 00:02:02.607 Message: lib/stack: Defining dependency "stack" 00:02:02.607 Has header "linux/userfaultfd.h" : YES 00:02:02.607 Has header "linux/vduse.h" : YES 00:02:02.607 Message: lib/vhost: Defining dependency "vhost" 00:02:02.607 Message: lib/ipsec: Defining dependency "ipsec" 00:02:02.607 Message: lib/pdcp: Defining dependency "pdcp" 00:02:02.607 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:02.607 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:02.607 Message: lib/fib: Defining dependency "fib" 00:02:02.607 Message: lib/port: Defining dependency "port" 00:02:02.607 Message: lib/pdump: Defining dependency "pdump" 00:02:02.607 Message: lib/table: Defining dependency "table" 00:02:02.607 Message: lib/pipeline: Defining dependency "pipeline" 00:02:02.607 Message: lib/graph: Defining dependency "graph" 00:02:02.607 Message: lib/node: Defining dependency "node" 00:02:02.607 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.559 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.559 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.559 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.559 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:03.559 Compiler for C supports arguments -Wno-unused-value: YES 00:02:03.559 Compiler for C supports arguments -Wno-format: YES 00:02:03.559 Compiler for C supports arguments -Wno-format-security: YES 00:02:03.559 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:03.559 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:03.559 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:03.559 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:03.559 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.559 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.559 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.559 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:03.559 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:03.559 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:03.559 Has header "sys/epoll.h" : YES 00:02:03.559 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:03.559 Configuring doxy-api-html.conf using configuration 00:02:03.559 Configuring doxy-api-man.conf using configuration 00:02:03.559 Program mandb found: YES (/usr/bin/mandb) 00:02:03.559 Program sphinx-build found: NO 00:02:03.559 Configuring rte_build_config.h using configuration 00:02:03.559 Message: 00:02:03.559 ================= 00:02:03.559 Applications Enabled 
00:02:03.559 ================= 00:02:03.559 00:02:03.559 apps: 00:02:03.559 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:03.559 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:03.560 test-pmd, test-regex, test-sad, test-security-perf, 00:02:03.560 00:02:03.560 Message: 00:02:03.560 ================= 00:02:03.560 Libraries Enabled 00:02:03.560 ================= 00:02:03.560 00:02:03.560 libs: 00:02:03.560 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.560 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:03.560 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:03.560 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:03.560 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:03.560 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:03.560 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:03.560 00:02:03.560 00:02:03.560 Message: 00:02:03.560 =============== 00:02:03.560 Drivers Enabled 00:02:03.560 =============== 00:02:03.560 00:02:03.560 common: 00:02:03.560 00:02:03.560 bus: 00:02:03.560 pci, vdev, 00:02:03.560 mempool: 00:02:03.560 ring, 00:02:03.560 dma: 00:02:03.560 00:02:03.560 net: 00:02:03.560 i40e, 00:02:03.560 raw: 00:02:03.560 00:02:03.560 crypto: 00:02:03.560 00:02:03.560 compress: 00:02:03.560 00:02:03.560 regex: 00:02:03.560 00:02:03.560 ml: 00:02:03.560 00:02:03.560 vdpa: 00:02:03.560 00:02:03.560 event: 00:02:03.560 00:02:03.560 baseband: 00:02:03.560 00:02:03.560 gpu: 00:02:03.560 00:02:03.560 00:02:03.560 Message: 00:02:03.560 ================= 00:02:03.560 Content Skipped 00:02:03.560 ================= 00:02:03.560 00:02:03.560 apps: 00:02:03.560 00:02:03.560 libs: 00:02:03.560 00:02:03.560 drivers: 00:02:03.560 common/cpt: not in enabled drivers build config 00:02:03.560 common/dpaax: not in enabled drivers build config 00:02:03.560 common/iavf: not in enabled drivers build config 00:02:03.560 common/idpf: not in enabled drivers build config 00:02:03.560 common/mvep: not in enabled drivers build config 00:02:03.560 common/octeontx: not in enabled drivers build config 00:02:03.560 bus/auxiliary: not in enabled drivers build config 00:02:03.560 bus/cdx: not in enabled drivers build config 00:02:03.560 bus/dpaa: not in enabled drivers build config 00:02:03.560 bus/fslmc: not in enabled drivers build config 00:02:03.560 bus/ifpga: not in enabled drivers build config 00:02:03.560 bus/platform: not in enabled drivers build config 00:02:03.560 bus/vmbus: not in enabled drivers build config 00:02:03.560 common/cnxk: not in enabled drivers build config 00:02:03.560 common/mlx5: not in enabled drivers build config 00:02:03.560 common/nfp: not in enabled drivers build config 00:02:03.560 common/qat: not in enabled drivers build config 00:02:03.560 common/sfc_efx: not in enabled drivers build config 00:02:03.560 mempool/bucket: not in enabled drivers build config 00:02:03.560 mempool/cnxk: not in enabled drivers build config 00:02:03.560 mempool/dpaa: not in enabled drivers build config 00:02:03.560 mempool/dpaa2: not in enabled drivers build config 00:02:03.560 mempool/octeontx: not in enabled drivers build config 00:02:03.560 mempool/stack: not in enabled drivers build config 00:02:03.560 dma/cnxk: not in enabled drivers build config 00:02:03.560 dma/dpaa: not in enabled drivers build config 00:02:03.560 dma/dpaa2: not in enabled 
drivers build config 00:02:03.560 dma/hisilicon: not in enabled drivers build config 00:02:03.560 dma/idxd: not in enabled drivers build config 00:02:03.560 dma/ioat: not in enabled drivers build config 00:02:03.560 dma/skeleton: not in enabled drivers build config 00:02:03.560 net/af_packet: not in enabled drivers build config 00:02:03.560 net/af_xdp: not in enabled drivers build config 00:02:03.560 net/ark: not in enabled drivers build config 00:02:03.560 net/atlantic: not in enabled drivers build config 00:02:03.560 net/avp: not in enabled drivers build config 00:02:03.560 net/axgbe: not in enabled drivers build config 00:02:03.560 net/bnx2x: not in enabled drivers build config 00:02:03.560 net/bnxt: not in enabled drivers build config 00:02:03.560 net/bonding: not in enabled drivers build config 00:02:03.560 net/cnxk: not in enabled drivers build config 00:02:03.560 net/cpfl: not in enabled drivers build config 00:02:03.560 net/cxgbe: not in enabled drivers build config 00:02:03.560 net/dpaa: not in enabled drivers build config 00:02:03.560 net/dpaa2: not in enabled drivers build config 00:02:03.560 net/e1000: not in enabled drivers build config 00:02:03.560 net/ena: not in enabled drivers build config 00:02:03.560 net/enetc: not in enabled drivers build config 00:02:03.560 net/enetfec: not in enabled drivers build config 00:02:03.560 net/enic: not in enabled drivers build config 00:02:03.560 net/failsafe: not in enabled drivers build config 00:02:03.560 net/fm10k: not in enabled drivers build config 00:02:03.560 net/gve: not in enabled drivers build config 00:02:03.560 net/hinic: not in enabled drivers build config 00:02:03.560 net/hns3: not in enabled drivers build config 00:02:03.560 net/iavf: not in enabled drivers build config 00:02:03.560 net/ice: not in enabled drivers build config 00:02:03.560 net/idpf: not in enabled drivers build config 00:02:03.560 net/igc: not in enabled drivers build config 00:02:03.560 net/ionic: not in enabled drivers build config 00:02:03.560 net/ipn3ke: not in enabled drivers build config 00:02:03.560 net/ixgbe: not in enabled drivers build config 00:02:03.560 net/mana: not in enabled drivers build config 00:02:03.560 net/memif: not in enabled drivers build config 00:02:03.560 net/mlx4: not in enabled drivers build config 00:02:03.560 net/mlx5: not in enabled drivers build config 00:02:03.560 net/mvneta: not in enabled drivers build config 00:02:03.560 net/mvpp2: not in enabled drivers build config 00:02:03.560 net/netvsc: not in enabled drivers build config 00:02:03.560 net/nfb: not in enabled drivers build config 00:02:03.560 net/nfp: not in enabled drivers build config 00:02:03.560 net/ngbe: not in enabled drivers build config 00:02:03.560 net/null: not in enabled drivers build config 00:02:03.560 net/octeontx: not in enabled drivers build config 00:02:03.560 net/octeon_ep: not in enabled drivers build config 00:02:03.560 net/pcap: not in enabled drivers build config 00:02:03.560 net/pfe: not in enabled drivers build config 00:02:03.560 net/qede: not in enabled drivers build config 00:02:03.560 net/ring: not in enabled drivers build config 00:02:03.560 net/sfc: not in enabled drivers build config 00:02:03.560 net/softnic: not in enabled drivers build config 00:02:03.560 net/tap: not in enabled drivers build config 00:02:03.560 net/thunderx: not in enabled drivers build config 00:02:03.560 net/txgbe: not in enabled drivers build config 00:02:03.560 net/vdev_netvsc: not in enabled drivers build config 00:02:03.560 net/vhost: not in enabled drivers 
build config 00:02:03.560 net/virtio: not in enabled drivers build config 00:02:03.560 net/vmxnet3: not in enabled drivers build config 00:02:03.560 raw/cnxk_bphy: not in enabled drivers build config 00:02:03.560 raw/cnxk_gpio: not in enabled drivers build config 00:02:03.560 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:03.560 raw/ifpga: not in enabled drivers build config 00:02:03.560 raw/ntb: not in enabled drivers build config 00:02:03.560 raw/skeleton: not in enabled drivers build config 00:02:03.560 crypto/armv8: not in enabled drivers build config 00:02:03.560 crypto/bcmfs: not in enabled drivers build config 00:02:03.560 crypto/caam_jr: not in enabled drivers build config 00:02:03.560 crypto/ccp: not in enabled drivers build config 00:02:03.560 crypto/cnxk: not in enabled drivers build config 00:02:03.560 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.560 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.560 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.560 crypto/mlx5: not in enabled drivers build config 00:02:03.560 crypto/mvsam: not in enabled drivers build config 00:02:03.560 crypto/nitrox: not in enabled drivers build config 00:02:03.560 crypto/null: not in enabled drivers build config 00:02:03.560 crypto/octeontx: not in enabled drivers build config 00:02:03.560 crypto/openssl: not in enabled drivers build config 00:02:03.560 crypto/scheduler: not in enabled drivers build config 00:02:03.560 crypto/uadk: not in enabled drivers build config 00:02:03.560 crypto/virtio: not in enabled drivers build config 00:02:03.560 compress/isal: not in enabled drivers build config 00:02:03.560 compress/mlx5: not in enabled drivers build config 00:02:03.560 compress/octeontx: not in enabled drivers build config 00:02:03.560 compress/zlib: not in enabled drivers build config 00:02:03.560 regex/mlx5: not in enabled drivers build config 00:02:03.560 regex/cn9k: not in enabled drivers build config 00:02:03.560 ml/cnxk: not in enabled drivers build config 00:02:03.560 vdpa/ifc: not in enabled drivers build config 00:02:03.560 vdpa/mlx5: not in enabled drivers build config 00:02:03.560 vdpa/nfp: not in enabled drivers build config 00:02:03.560 vdpa/sfc: not in enabled drivers build config 00:02:03.560 event/cnxk: not in enabled drivers build config 00:02:03.560 event/dlb2: not in enabled drivers build config 00:02:03.560 event/dpaa: not in enabled drivers build config 00:02:03.560 event/dpaa2: not in enabled drivers build config 00:02:03.560 event/dsw: not in enabled drivers build config 00:02:03.560 event/opdl: not in enabled drivers build config 00:02:03.560 event/skeleton: not in enabled drivers build config 00:02:03.560 event/sw: not in enabled drivers build config 00:02:03.560 event/octeontx: not in enabled drivers build config 00:02:03.560 baseband/acc: not in enabled drivers build config 00:02:03.560 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:03.560 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:03.560 baseband/la12xx: not in enabled drivers build config 00:02:03.560 baseband/null: not in enabled drivers build config 00:02:03.560 baseband/turbo_sw: not in enabled drivers build config 00:02:03.560 gpu/cuda: not in enabled drivers build config 00:02:03.560 00:02:03.560 00:02:03.560 Build targets in project: 217 00:02:03.560 00:02:03.560 DPDK 23.11.0 00:02:03.560 00:02:03.560 User defined options 00:02:03.560 libdir : lib 00:02:03.560 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 
00:02:03.560 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:03.560 c_link_args : 00:02:03.560 enable_docs : false 00:02:03.560 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:03.560 enable_kmods : false 00:02:03.560 machine : native 00:02:03.561 tests : false 00:02:03.561 00:02:03.561 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.561 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:03.561 15:49:31 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:02:03.561 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:03.561 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.561 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.824 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.824 [4/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.824 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.824 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.824 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.824 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.824 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.824 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.824 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.824 [12/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.824 [13/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.824 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.824 [15/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.824 [16/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.824 [17/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.824 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.824 [19/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.824 [20/707] Linking static target lib/librte_kvargs.a 00:02:03.824 [21/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.824 [22/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.824 [23/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.824 [24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.824 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.824 [26/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.824 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.824 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.824 [29/707] Linking static target lib/librte_pci.a 00:02:03.824 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.824 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.824 [32/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:04.084 [33/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.084 [34/707] Linking static target lib/librte_log.a 00:02:04.084 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:04.084 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:04.346 [37/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:04.346 [38/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.346 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.346 [40/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.346 [41/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.346 [42/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.346 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.346 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:04.346 [45/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.346 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:04.346 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.346 [48/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.346 [49/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.346 [50/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.346 [51/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.346 [52/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:04.346 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.346 [54/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:04.346 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:04.346 [56/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:04.346 [57/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:04.346 [58/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.346 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.346 [60/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.346 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.346 [62/707] Linking static target lib/librte_meter.a 00:02:04.346 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:04.346 [64/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:04.346 [65/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:04.346 [66/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.346 [67/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:04.346 [68/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.346 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.346 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:04.346 [71/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:04.346 [72/707] Compiling C object 
lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:04.346 [73/707] Linking static target lib/librte_ring.a 00:02:04.346 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.346 [75/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.346 [76/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.346 [77/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:04.346 [78/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.346 [79/707] Linking static target lib/librte_cmdline.a 00:02:04.346 [80/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:04.612 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:04.612 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:04.612 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:04.612 [84/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:04.612 [85/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.612 [86/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:04.612 [87/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:04.612 [88/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.612 [89/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:04.612 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:04.612 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:04.612 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.612 [93/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:04.612 [94/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.612 [95/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:04.612 [96/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:04.612 [97/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:04.612 [98/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:04.612 [99/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:04.612 [100/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:04.612 [101/707] Linking static target lib/librte_metrics.a 00:02:04.612 [102/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.612 [103/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:04.612 [104/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:04.612 [105/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:04.612 [106/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:04.612 [107/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:04.612 [108/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.612 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:04.612 [110/707] Linking static target lib/librte_net.a 00:02:04.612 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.612 [112/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:04.612 [113/707] Linking static target lib/librte_bitratestats.a 00:02:04.612 [114/707] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:04.612 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:04.612 [116/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:04.612 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:04.612 [118/707] Linking static target lib/librte_cfgfile.a 00:02:04.612 [119/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.612 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:04.873 [121/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:04.873 [122/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:04.873 [123/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:04.873 [124/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.873 [125/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.873 [126/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:04.873 [127/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.873 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:04.873 [129/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.873 [130/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:04.873 [131/707] Linking target lib/librte_log.so.24.0 00:02:04.873 [132/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:04.873 [133/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.873 [134/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.873 [135/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:04.873 [136/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:04.873 [137/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:04.873 [138/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:04.873 [139/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.873 [140/707] Linking static target lib/librte_timer.a 00:02:04.873 [141/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:04.873 [142/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:04.873 [143/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:05.135 [144/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.135 [145/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.135 [146/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.135 [147/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.135 [148/707] Linking static target lib/librte_mempool.a 00:02:05.135 [149/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:05.135 [150/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:05.135 [151/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:05.135 [152/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.135 [153/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.135 [154/707] Linking static 
target lib/librte_bbdev.a 00:02:05.135 [155/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:05.135 [156/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:05.135 [157/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.135 [158/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.135 [159/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.135 [160/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:05.135 [161/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:05.135 [162/707] Linking target lib/librte_kvargs.so.24.0 00:02:05.135 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:05.135 [164/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:05.135 [165/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:05.135 [166/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:05.135 [167/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:05.135 [168/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.135 [169/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:05.135 [170/707] Linking static target lib/librte_jobstats.a 00:02:05.135 [171/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:05.135 [172/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:05.135 [173/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:05.135 [174/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.135 [175/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.135 [176/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:05.135 [177/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:05.135 [178/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:05.135 [179/707] Linking static target lib/librte_compressdev.a 00:02:05.135 [180/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.395 [181/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:05.395 [182/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:05.395 [183/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:05.395 [184/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:05.395 [185/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:05.395 [186/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:05.395 [187/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:05.395 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:05.395 [189/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:05.395 [190/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:05.395 [191/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:05.395 [192/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:05.395 [193/707] Linking static target lib/librte_dispatcher.a 00:02:05.395 [194/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 
00:02:05.395 [195/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:05.395 [196/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:05.395 [197/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:05.395 [198/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:05.395 [199/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:05.395 [200/707] Linking static target lib/librte_latencystats.a 00:02:05.395 [201/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.395 [202/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:05.395 [203/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:05.395 [204/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.395 [205/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:05.395 [206/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:05.395 [207/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:05.395 [208/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.395 [209/707] Linking static target lib/librte_rcu.a 00:02:05.395 [210/707] Linking static target lib/librte_gpudev.a 00:02:05.395 [211/707] Linking static target lib/librte_eal.a 00:02:05.395 [212/707] Linking static target lib/librte_telemetry.a 00:02:05.395 [213/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:05.395 [214/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:05.395 [215/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:05.395 [216/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:05.395 [217/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:05.395 [218/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:05.395 [219/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:05.395 [220/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:05.395 [221/707] Linking static target lib/librte_stack.a 00:02:05.395 [222/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:05.656 [223/707] Linking static target lib/librte_gro.a 00:02:05.656 [224/707] Linking static target lib/librte_dmadev.a 00:02:05.656 [225/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:05.656 [226/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.656 [227/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:05.656 [228/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:05.656 [229/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:05.656 [230/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:05.656 [231/707] Linking static target lib/librte_regexdev.a 00:02:05.656 [232/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:05.656 [233/707] Linking static target lib/librte_gso.a 00:02:05.656 [234/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:05.656 [235/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:05.656 [236/707] Linking static target lib/librte_distributor.a 00:02:05.656 [237/707] Compiling C object 
lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:05.656 [238/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:05.656 [239/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:05.656 [240/707] Linking static target lib/librte_mldev.a 00:02:05.656 [241/707] Linking static target lib/librte_rawdev.a 00:02:05.656 [242/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:05.656 [243/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.656 [244/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:05.656 [245/707] Linking static target lib/librte_mbuf.a 00:02:05.656 [246/707] Linking static target lib/librte_power.a 00:02:05.656 [247/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.656 [248/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:05.656 [249/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:05.656 [250/707] Linking static target lib/librte_ip_frag.a 00:02:05.918 [251/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:05.918 [252/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:05.918 [253/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:05.918 [254/707] Linking static target lib/librte_pcapng.a 00:02:05.918 [255/707] Linking static target lib/librte_reorder.a 00:02:05.918 [256/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:05.918 [257/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:05.918 [258/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:05.918 [259/707] Linking static target lib/librte_bpf.a 00:02:05.918 [260/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:05.918 [261/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.918 [262/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:05.918 [263/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:05.918 [264/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:05.918 [265/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.919 [266/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.919 [267/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.919 [268/707] Linking static target lib/librte_security.a 00:02:05.919 [269/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:05.919 [270/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.919 [271/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.919 [272/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:05.919 [273/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.919 [274/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:05.919 [275/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.919 [276/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:05.919 [277/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.919 [278/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 
00:02:05.919 [279/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:05.919 [280/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:05.919 [281/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:06.180 [282/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:06.180 [283/707] Linking static target lib/librte_lpm.a 00:02:06.180 [284/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [285/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [286/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:06.180 [287/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:06.180 [288/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:06.180 [289/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:06.180 [290/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [291/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [292/707] Linking static target lib/librte_rib.a 00:02:06.180 [293/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:06.180 [294/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:06.180 [295/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:06.180 [296/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [297/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [298/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [299/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:06.180 [300/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:06.180 [301/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [302/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.180 [303/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:06.180 [304/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:06.180 [305/707] Linking target lib/librte_telemetry.so.24.0 00:02:06.180 [306/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:06.441 [307/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:06.441 [308/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:06.441 [309/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:06.441 [310/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:06.441 [311/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:06.441 [312/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:06.441 [313/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:06.441 [314/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:06.441 [315/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.441 [316/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.441 
[317/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:06.441 [318/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:06.441 [319/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:06.441 [320/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:06.441 [321/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:06.441 [322/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:06.441 [323/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:06.441 [324/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:06.441 [325/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:06.441 [326/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.441 [327/707] Linking static target lib/librte_efd.a 00:02:06.441 [328/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:06.441 [329/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:06.441 [330/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:06.441 [331/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:06.441 [332/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:06.441 [333/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:06.712 [334/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:06.712 [335/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:06.712 [336/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:06.712 [337/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:06.712 [338/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.712 [339/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.712 [340/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:06.712 [341/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.712 [342/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:06.712 [343/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:06.712 [344/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:06.712 [345/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:06.712 [346/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:06.712 [347/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:06.712 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:06.712 [349/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.712 [350/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:06.712 [351/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:06.712 [352/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:06.973 [353/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:06.973 [354/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:06.973 [355/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:06.973 [356/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 
00:02:06.973 [357/707] Linking static target lib/librte_fib.a 00:02:06.973 [358/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.973 [359/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:06.973 [360/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:06.973 [361/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.973 [362/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.973 [363/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:06.973 [364/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:06.973 [365/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:06.973 [366/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.973 [367/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:06.973 [368/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:06.973 [369/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:06.973 [370/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:06.973 [371/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:06.973 [372/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:06.973 [373/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:06.973 [374/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:06.973 [375/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:06.973 [376/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:06.973 [377/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:06.973 [378/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.973 [379/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:07.234 [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:07.234 [381/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:07.234 [382/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:07.234 [383/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:07.234 [384/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:07.234 [385/707] Linking static target lib/librte_graph.a 00:02:07.234 [386/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:07.234 [387/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:07.234 [388/707] Linking static target lib/librte_pdump.a 00:02:07.234 [389/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:07.234 [390/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:07.234 [391/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:07.234 [392/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:07.234 [393/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:07.234 [394/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:07.234 [395/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:07.234 [396/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:07.234 
[397/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:07.234 [398/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:07.234 [399/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:07.234 [400/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:07.234 [401/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:07.234 [402/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:07.234 [403/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:07.502 [404/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:07.502 [405/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:07.502 [406/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:07.502 [407/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:07.502 [408/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:07.502 [409/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:07.502 [410/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:07.502 [411/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.502 [412/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.502 [413/707] Linking static target lib/librte_table.a 00:02:07.502 [414/707] Linking static target drivers/librte_bus_vdev.a 00:02:07.502 [415/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:07.502 [416/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:07.502 [417/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:07.502 [418/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.502 [419/707] Linking static target lib/librte_sched.a 00:02:07.502 [420/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:07.502 [421/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:07.502 [422/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:07.502 [423/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:07.502 [424/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:07.502 [425/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:07.502 [426/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:07.502 [427/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:07.502 [428/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:07.502 [429/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:07.764 [430/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:07.764 [431/707] Linking static target lib/librte_cryptodev.a 00:02:07.764 [432/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:07.764 [433/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.764 [434/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:07.764 [435/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:07.764 [436/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:07.764 
[437/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:07.764 [438/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.764 [439/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:07.764 [440/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.764 [441/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:07.764 [442/707] Linking static target drivers/librte_bus_pci.a 00:02:07.764 [443/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:07.764 [444/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:07.764 [445/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:07.764 [446/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:07.764 [447/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:07.764 [448/707] Linking static target lib/librte_ipsec.a 00:02:07.764 [449/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:07.764 [450/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.024 [451/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.024 [452/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:08.024 [453/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:08.024 [454/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:08.024 [455/707] Linking static target lib/librte_member.a 00:02:08.024 [456/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:08.024 [457/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:08.024 [458/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:08.024 [459/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:08.024 [460/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:08.024 [461/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:08.024 [462/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:08.024 [463/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.024 [464/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:08.024 [465/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.024 [466/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:08.024 [467/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:08.024 [468/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:08.024 [469/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:08.024 [470/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:08.024 [471/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:08.024 [472/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:08.024 [473/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:08.024 [474/707] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:08.024 [475/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:08.024 [476/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:08.024 [477/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:08.024 [478/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:08.024 [479/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:08.024 [480/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:08.024 [481/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.024 [482/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:08.024 [483/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:08.024 [484/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:08.024 [485/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:08.024 [486/707] Linking static target lib/librte_node.a 00:02:08.282 [487/707] Linking static target lib/librte_pdcp.a 00:02:08.282 [488/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:08.282 [489/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.282 [490/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:08.282 [491/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:08.282 [492/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.282 [493/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:08.282 [494/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:08.282 [495/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:08.282 [496/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.282 [497/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.282 [498/707] Linking static target drivers/librte_mempool_ring.a 00:02:08.282 [499/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:08.282 [500/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:08.282 [501/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:08.283 [502/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.283 [503/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.283 [504/707] Linking static target lib/librte_hash.a 00:02:08.283 [505/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:08.283 [506/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.283 [507/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:08.283 [508/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:08.283 [509/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:08.283 [510/707] Linking static target lib/acl/libavx2_tmp.a 00:02:08.283 [511/707] Linking static target lib/librte_port.a 00:02:08.283 [512/707] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:08.283 [513/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:08.283 [514/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:08.541 [515/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:08.541 [516/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.541 [517/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:08.541 [518/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:08.541 [519/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:08.541 [520/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:08.541 [521/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:08.541 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:08.541 [523/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:08.541 [524/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:08.541 [525/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:08.541 [526/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.541 [527/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:08.541 [528/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.541 [529/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.541 [530/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:08.541 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:08.541 [532/707] Linking static target lib/librte_eventdev.a 00:02:08.541 [533/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:08.541 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:08.541 [535/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:08.541 [536/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:08.541 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:08.541 [538/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:08.541 [539/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:08.541 [540/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:08.541 [541/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:08.541 [542/707] Linking static target lib/librte_acl.a 00:02:08.541 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:08.541 [544/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:08.541 [545/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:08.799 [546/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:08.799 [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:08.799 [548/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:08.799 [549/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:08.799 [550/707] Compiling C object 
app/dpdk-test-sad.p/test-sad_main.c.o 00:02:08.799 [551/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:08.799 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:08.799 [553/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:08.799 [554/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:08.799 [555/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:08.799 [556/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:08.799 [557/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:08.799 [558/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:08.799 [559/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:08.799 [560/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:08.799 [561/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:09.056 [562/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:09.056 [563/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.056 [564/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.056 [565/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.056 [566/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:09.056 [567/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:09.056 [568/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:09.314 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:09.314 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:09.314 [571/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:09.314 [572/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:09.314 [573/707] Linking static target lib/librte_ethdev.a 00:02:09.314 [574/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.572 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:09.830 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:09.830 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:10.088 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:10.088 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:10.346 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:10.912 [581/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:10.912 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:10.912 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:11.170 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:11.170 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:11.171 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:11.171 [587/707] Linking static target drivers/librte_net_i40e.a 00:02:11.429 [588/707] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:12.365 [589/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.365 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:12.365 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.932 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:18.207 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.207 [594/707] Linking target lib/librte_eal.so.24.0
00:02:18.207 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:18.207 [596/707] Linking target lib/librte_ring.so.24.0
00:02:18.207 [597/707] Linking target lib/librte_cfgfile.so.24.0
00:02:18.207 [598/707] Linking target lib/librte_pci.so.24.0
00:02:18.207 [599/707] Linking target drivers/librte_bus_vdev.so.24.0
00:02:18.207 [600/707] Linking target lib/librte_meter.so.24.0
00:02:18.207 [601/707] Linking target lib/librte_timer.so.24.0
00:02:18.207 [602/707] Linking target lib/librte_rawdev.so.24.0
00:02:18.207 [603/707] Linking target lib/librte_jobstats.so.24.0
00:02:18.207 [604/707] Linking target lib/librte_stack.so.24.0
00:02:18.207 [605/707] Linking target lib/librte_dmadev.so.24.0
00:02:18.207 [606/707] Linking target lib/librte_acl.so.24.0
00:02:18.207 [607/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:18.207 [608/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:18.207 [609/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:18.207 [610/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:18.207 [611/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:18.207 [612/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:18.207 [613/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:18.207 [614/707] Linking target lib/librte_mempool.so.24.0
00:02:18.207 [615/707] Linking target lib/librte_rcu.so.24.0
00:02:18.207 [616/707] Linking target drivers/librte_bus_pci.so.24.0
00:02:18.207 [617/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:18.207 [618/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.467 [619/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:18.467 [620/707] Linking target lib/librte_mbuf.so.24.0
00:02:18.467 [621/707] Linking target drivers/librte_mempool_ring.so.24.0
00:02:18.467 [622/707] Linking target lib/librte_rib.so.24.0
00:02:18.467 [623/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:18.467 [624/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:18.467 [625/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:18.467 [626/707] Linking target lib/librte_bbdev.so.24.0
00:02:18.467 [627/707] Linking target lib/librte_distributor.so.24.0
00:02:18.467 [628/707] Linking target lib/librte_net.so.24.0
00:02:18.467 [629/707] Linking target lib/librte_compressdev.so.24.0
00:02:18.467 [630/707] Linking target lib/librte_regexdev.so.24.0
00:02:18.467 [631/707] Linking target lib/librte_fib.so.24.0
00:02:18.467 [632/707] Linking target lib/librte_mldev.so.24.0
00:02:18.467 [633/707] Linking target lib/librte_cryptodev.so.24.0
00:02:18.467 [634/707] Linking target lib/librte_gpudev.so.24.0
00:02:18.467 [635/707] Linking target lib/librte_reorder.so.24.0
00:02:18.467 [636/707] Linking target lib/librte_sched.so.24.0
00:02:18.726 [637/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:18.726 [638/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:18.726 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:18.726 [640/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:18.726 [641/707] Linking target lib/librte_cmdline.so.24.0
00:02:18.726 [642/707] Linking target lib/librte_hash.so.24.0
00:02:18.726 [643/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:18.726 [644/707] Linking target lib/librte_security.so.24.0
00:02:18.726 [645/707] Linking target lib/librte_ethdev.so.24.0
00:02:18.726 [646/707] Linking static target lib/librte_pipeline.a
00:02:18.726 [647/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:18.985 [648/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:18.985 [649/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:18.985 [650/707] Linking target lib/librte_pdcp.so.24.0
00:02:18.985 [651/707] Linking target lib/librte_member.so.24.0
00:02:18.985 [652/707] Linking target lib/librte_efd.so.24.0
00:02:18.985 [653/707] Linking target lib/librte_lpm.so.24.0
00:02:18.985 [654/707] Linking target lib/librte_ipsec.so.24.0
00:02:18.985 [655/707] Linking target lib/librte_metrics.so.24.0
00:02:18.985 [656/707] Linking target lib/librte_pcapng.so.24.0
00:02:18.985 [657/707] Linking target lib/librte_gso.so.24.0
00:02:18.985 [658/707] Linking target lib/librte_gro.so.24.0
00:02:18.985 [659/707] Linking target lib/librte_bpf.so.24.0
00:02:18.985 [660/707] Linking target lib/librte_ip_frag.so.24.0
00:02:18.985 [661/707] Linking target lib/librte_power.so.24.0
00:02:18.985 [662/707] Linking target lib/librte_eventdev.so.24.0
00:02:18.985 [663/707] Linking target drivers/librte_net_i40e.so.24.0
00:02:18.985 [664/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:18.985 [665/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:18.985 [666/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:18.985 [667/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:18.985 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:18.985 [669/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:18.985 [670/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:18.985 [671/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:19.244 [672/707] Linking target lib/librte_graph.so.24.0
00:02:19.244 [673/707] Linking static target lib/librte_vhost.a
00:02:19.244 [674/707] Linking target lib/librte_dispatcher.so.24.0
00:02:19.244 [675/707] Linking target lib/librte_bitratestats.so.24.0
00:02:19.244 [676/707] Linking target lib/librte_latencystats.so.24.0
00:02:19.244 [677/707] Linking target lib/librte_pdump.so.24.0
00:02:19.244 [678/707] Linking target lib/librte_port.so.24.0
00:02:19.244 [679/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:19.244 [680/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:19.244 [681/707] Linking target lib/librte_node.so.24.0
00:02:19.244 [682/707] Linking target lib/librte_table.so.24.0
00:02:19.502 [683/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:19.502 [684/707] Linking target app/dpdk-test-acl
00:02:19.502 [685/707] Linking target app/dpdk-test-cmdline
00:02:19.502 [686/707] Linking target app/dpdk-test-gpudev
00:02:19.502 [687/707] Linking target app/dpdk-test-pipeline
00:02:19.502 [688/707] Linking target app/dpdk-pdump
00:02:19.502 [689/707] Linking target app/dpdk-test-fib
00:02:19.502 [690/707] Linking target app/dpdk-test-flow-perf
00:02:19.502 [691/707] Linking target app/dpdk-dumpcap
00:02:19.502 [692/707] Linking target app/dpdk-test-security-perf
00:02:19.502 [693/707] Linking target app/dpdk-test-sad
00:02:19.502 [694/707] Linking target app/dpdk-test-dma-perf
00:02:19.502 [695/707] Linking target app/dpdk-proc-info
00:02:19.502 [696/707] Linking target app/dpdk-test-mldev
00:02:19.502 [697/707] Linking target app/dpdk-test-crypto-perf
00:02:19.502 [698/707] Linking target app/dpdk-graph
00:02:19.502 [699/707] Linking target app/dpdk-test-compress-perf
00:02:19.503 [700/707] Linking target app/dpdk-test-bbdev
00:02:19.503 [701/707] Linking target app/dpdk-test-regex
00:02:19.503 [702/707] Linking target app/dpdk-test-eventdev
00:02:19.761 [703/707] Linking target app/dpdk-testpmd
00:02:21.214 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.473 [705/707] Linking target lib/librte_vhost.so.24.0
00:02:24.009 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.268 [707/707] Linking target lib/librte_pipeline.so.24.0
00:02:24.268 15:49:52 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:24.268 15:49:52 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:24.268 15:49:52 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install
00:02:24.268 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:02:24.268 [0/1] Installing files.
00:02:24.532 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:24.532 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:24.533 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:24.534 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:24.534 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.534 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.535 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:24.535 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:24.535 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:24.535 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:24.536 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.536 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.536 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.537 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 
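Among the example sources installed above is examples/helloworld/main.c, the canonical DPDK program skeleton: initialize the EAL, launch a function on every lcore, wait, clean up. A minimal sketch of that skeleton follows; it is not the installed file itself, only an illustration built against the headers and libraries this job installs (assumed to be linked the usual way, via pkg-config libdpdk):

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    /* Worker run on every lcore, including the main lcore via CALL_MAIN. */
    static int
    lcore_hello(__rte_unused void *arg)
    {
            printf("hello from lcore %u\n", rte_lcore_id());
            return 0;
    }

    int
    main(int argc, char **argv)
    {
            /* Initialize the EAL; consumes EAL arguments such as -l/--lcores. */
            int ret = rte_eal_init(argc, argv);
            if (ret < 0)
                    rte_panic("Cannot init EAL\n");

            /* Launch the worker on all lcores and wait for completion. */
            rte_eal_mp_remote_launch(lcore_hello, NULL, CALL_MAIN);
            rte_eal_mp_wait_lcore();

            rte_eal_cleanup();
            return 0;
    }

A program like this would typically be compiled against the install tree produced here with: cc main.c $(pkg-config --cflags --libs libdpdk)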
00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:24.537 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:24.537 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_metrics.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.537 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
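The libraries installed in this run include librte_ring, the fixed-size lock-free FIFO that mempools and much of DPDK build on. A minimal sketch of its create/enqueue/dequeue API against the headers installed below (the ring name and stored value are illustrative, not from this build):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>
    #include <rte_errno.h>
    #include <rte_debug.h>

    int
    main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    rte_panic("Cannot init EAL\n");

            /* Single-producer/single-consumer ring; count must be a power of two. */
            struct rte_ring *r = rte_ring_create("demo_ring", 1024, rte_socket_id(),
                                                 RING_F_SP_ENQ | RING_F_SC_DEQ);
            if (r == NULL)
                    rte_panic("ring create failed: %s\n", rte_strerror(rte_errno));

            int value = 42;
            void *obj = NULL;

            /* The ring stores void * entries; objects are passed by pointer. */
            rte_ring_enqueue(r, &value);
            if (rte_ring_dequeue(r, &obj) == 0)
                    printf("dequeued %d\n", *(int *)obj);

            rte_ring_free(r);
            rte_eal_cleanup();
            return 0;
    }

The SP_ENQ/SC_DEQ flags pick the cheapest synchronization mode; omitting them gives the default multi-producer/multi-consumer behavior at slightly higher cost per operation.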
00:02:24.801 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:24.801 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:24.801 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:24.801 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:24.801 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:24.801 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.801 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.801 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.801 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.801 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.801 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.801 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.802 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.803 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.804 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
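[editor's note] The entries above stage DPDK's public headers into dpdk/build/include. As a hedged illustration (not part of this log), a minimal EAL program can be compiled against such a staged tree; the file name eal_check.c, the -march choice, and the abbreviated link line are assumptions, and the shared librte_* objects resolve their remaining dependencies transitively.

DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build   # layout as shown in this log

cat > eal_check.c <<'EOF'
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    /* Bring up the Environment Abstraction Layer. */
    if (rte_eal_init(argc, argv) < 0)
        return 1;
    printf("%u ethdev port(s) detected\n",
           (unsigned)rte_eth_dev_count_avail());
    rte_eal_cleanup();
    return 0;
}
EOF

# Abbreviated link line; -march=native because DPDK headers assume a capable ISA.
gcc -march=native eal_check.c -I"$DPDK/include" -L"$DPDK/lib" \
    -lrte_ethdev -lrte_eal -o eal_check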
00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:24.805 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:24.805 Installing symlink pointing to librte_log.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:24.805 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so 00:02:24.805 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:24.805 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:24.805 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:24.805 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:24.805 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:24.805 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:24.805 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:24.805 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:24.805 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:24.805 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:24.805 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:24.805 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:24.805 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:24.806 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:24.806 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:24.806 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:24.806 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:24.806 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:24.806 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:24.806 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:24.806 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:24.806 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:24.806 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:24.806 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:24.806 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:24.806 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:24.806 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:24.806 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:24.806 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:24.806 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:24.806 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:24.806 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:24.806 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:24.806 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:24.806 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:24.806 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:24.806 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:24.806 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:24.806 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:24.806 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:24.806 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:24.806 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:24.806 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:24.806 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:24.806 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:24.806 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:24.806 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:24.806 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:24.806 
Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:24.806 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:24.806 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:24.806 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:24.806 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:24.806 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:24.806 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:24.806 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:24.806 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:24.806 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:24.806 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:24.806 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:24.806 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:24.806 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:24.806 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:24.806 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:24.806 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:24.806 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:24.806 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:24.806 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:24.806 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:24.806 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:24.806 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:24.806 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:24.806 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:24.806 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:24.806 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:24.806 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:24.806 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:24.806 Installing symlink pointing to librte_latencystats.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:24.806 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:24.806 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:24.806 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:24.806 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:24.806 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:24.806 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:24.806 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:24.806 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:24.806 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:24.806 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:24.806 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:24.806 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:24.806 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:24.806 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:24.806 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:24.806 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:24.806 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:24.806 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:24.806 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:24.806 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:24.806 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:24.806 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:24.806 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:24.806 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:24.806 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:24.806 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:24.806 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:24.806 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:24.806 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:24.806 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:24.806 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:24.806 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:24.806 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:24.806 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:24.806 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:24.806 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:24.806 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:24.807 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:24.807 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:24.807 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:24.807 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:24.807 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:24.807 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:24.807 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:24.807 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:24.807 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:24.807 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:24.807 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:24.807 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
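[editor's note] Each "Installing symlink" pair in this run builds the conventional three-name shared-object chain (real file, soname, bare linker name). A minimal sketch of what the installer is doing for one library, using librte_eal as the example:

cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
# real file librte_eal.so.24.0 was installed earlier in this log
ln -sf librte_eal.so.24.0 librte_eal.so.24   # soname the runtime loader resolves
ln -sf librte_eal.so.24   librte_eal.so      # bare name the linker finds via -lrte_eal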
00:02:24.807 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:24.807 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:24.807 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:24.807 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:25.066 15:49:53 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:25.066 15:49:53 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:25.066 00:02:25.066 real 0m27.796s 00:02:25.066 user 8m5.701s 00:02:25.066 sys 2m40.130s 00:02:25.066 15:49:53 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:25.066 15:49:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:25.066 ************************************ 00:02:25.066 END TEST build_native_dpdk 00:02:25.066 ************************************ 00:02:25.066 15:49:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:25.066 15:49:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:25.066 15:49:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:25.066 15:49:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:25.066 15:49:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:25.066 15:49:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:25.066 15:49:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:25.067 15:49:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:25.067 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:25.326 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:25.326 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:25.326 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:25.586 Using 'verbs' RDMA provider 00:02:41.414 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:53.623 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:53.623 Creating mk/config.mk...done. 00:02:53.623 Creating mk/cc.flags.mk...done. 00:02:53.623 Type 'make' to build. 00:02:53.623 15:50:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:53.623 15:50:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:53.623 15:50:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:53.623 15:50:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.623 ************************************ 00:02:53.623 START TEST make 00:02:53.623 ************************************ 00:02:53.623 15:50:21 make -- common/autotest_common.sh@1125 -- $ make -j112 00:02:53.623 make[1]: Nothing to be done for 'all'. 
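[editor's note] The "Using .../dpdk/build/lib/pkgconfig for additional libs" line above is SPDK's configure locating this DPDK build through the libdpdk.pc file installed earlier in the log. The same lookup can be reproduced by hand with standard pkg-config; a hedged sketch, paths as in this log:

export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk        # version of the staged tree
pkg-config --cflags --libs libdpdk     # compile/link flags a consumer would use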
00:03:25.708 CC lib/log/log.o 00:03:25.708 CC lib/log/log_flags.o 00:03:25.708 CC lib/log/log_deprecated.o 00:03:25.708 CC lib/ut/ut.o 00:03:25.708 CC lib/ut_mock/mock.o 00:03:25.708 LIB libspdk_log.a 00:03:25.708 LIB libspdk_ut.a 00:03:25.708 LIB libspdk_ut_mock.a 00:03:25.708 SO libspdk_log.so.7.0 00:03:25.708 SO libspdk_ut_mock.so.6.0 00:03:25.708 SO libspdk_ut.so.2.0 00:03:25.708 SYMLINK libspdk_log.so 00:03:25.708 SYMLINK libspdk_ut_mock.so 00:03:25.708 SYMLINK libspdk_ut.so 00:03:25.708 CC lib/ioat/ioat.o 00:03:25.708 CXX lib/trace_parser/trace.o 00:03:25.708 CC lib/dma/dma.o 00:03:25.708 CC lib/util/base64.o 00:03:25.708 CC lib/util/bit_array.o 00:03:25.708 CC lib/util/cpuset.o 00:03:25.708 CC lib/util/crc16.o 00:03:25.708 CC lib/util/crc32.o 00:03:25.708 CC lib/util/crc32c.o 00:03:25.708 CC lib/util/crc32_ieee.o 00:03:25.708 CC lib/util/crc64.o 00:03:25.708 CC lib/util/dif.o 00:03:25.708 CC lib/util/fd.o 00:03:25.708 CC lib/util/fd_group.o 00:03:25.708 CC lib/util/file.o 00:03:25.708 CC lib/util/hexlify.o 00:03:25.708 CC lib/util/iov.o 00:03:25.708 CC lib/util/math.o 00:03:25.708 CC lib/util/net.o 00:03:25.708 CC lib/util/pipe.o 00:03:25.708 CC lib/util/strerror_tls.o 00:03:25.708 CC lib/util/string.o 00:03:25.708 CC lib/util/uuid.o 00:03:25.708 CC lib/util/xor.o 00:03:25.708 CC lib/util/zipf.o 00:03:25.708 CC lib/util/md5.o 00:03:25.708 CC lib/vfio_user/host/vfio_user.o 00:03:25.708 CC lib/vfio_user/host/vfio_user_pci.o 00:03:25.708 LIB libspdk_ioat.a 00:03:25.708 SO libspdk_ioat.so.7.0 00:03:25.708 LIB libspdk_dma.a 00:03:25.708 SO libspdk_dma.so.5.0 00:03:25.708 SYMLINK libspdk_ioat.so 00:03:25.708 SYMLINK libspdk_dma.so 00:03:25.708 LIB libspdk_vfio_user.a 00:03:25.708 SO libspdk_vfio_user.so.5.0 00:03:25.708 LIB libspdk_util.a 00:03:25.708 SYMLINK libspdk_vfio_user.so 00:03:25.708 SO libspdk_util.so.10.0 00:03:25.708 SYMLINK libspdk_util.so 00:03:25.708 LIB libspdk_trace_parser.a 00:03:25.708 SO libspdk_trace_parser.so.6.0 00:03:25.708 SYMLINK libspdk_trace_parser.so 00:03:25.708 CC lib/env_dpdk/env.o 00:03:25.708 CC lib/env_dpdk/pci.o 00:03:25.708 CC lib/env_dpdk/memory.o 00:03:25.708 CC lib/env_dpdk/init.o 00:03:25.708 CC lib/json/json_parse.o 00:03:25.708 CC lib/json/json_util.o 00:03:25.708 CC lib/env_dpdk/pci_ioat.o 00:03:25.708 CC lib/conf/conf.o 00:03:25.708 CC lib/env_dpdk/threads.o 00:03:25.708 CC lib/json/json_write.o 00:03:25.708 CC lib/env_dpdk/pci_vmd.o 00:03:25.708 CC lib/env_dpdk/pci_virtio.o 00:03:25.708 CC lib/rdma_provider/common.o 00:03:25.708 CC lib/env_dpdk/pci_event.o 00:03:25.708 CC lib/env_dpdk/pci_idxd.o 00:03:25.708 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:25.708 CC lib/env_dpdk/sigbus_handler.o 00:03:25.708 CC lib/rdma_utils/rdma_utils.o 00:03:25.708 CC lib/env_dpdk/pci_dpdk.o 00:03:25.708 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.708 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.708 CC lib/vmd/vmd.o 00:03:25.708 CC lib/vmd/led.o 00:03:25.708 CC lib/idxd/idxd.o 00:03:25.708 CC lib/idxd/idxd_user.o 00:03:25.708 CC lib/idxd/idxd_kernel.o 00:03:25.708 LIB libspdk_rdma_provider.a 00:03:25.708 SO libspdk_rdma_provider.so.6.0 00:03:25.708 LIB libspdk_conf.a 00:03:25.708 LIB libspdk_rdma_utils.a 00:03:25.708 SO libspdk_conf.so.6.0 00:03:25.708 SYMLINK libspdk_rdma_provider.so 00:03:25.708 LIB libspdk_json.a 00:03:25.708 SO libspdk_rdma_utils.so.1.0 00:03:25.708 SO libspdk_json.so.6.0 00:03:25.708 SYMLINK libspdk_conf.so 00:03:25.708 SYMLINK libspdk_rdma_utils.so 00:03:25.708 SYMLINK libspdk_json.so 00:03:25.708 LIB libspdk_idxd.a 00:03:25.708 SO 
libspdk_idxd.so.12.1 00:03:25.708 LIB libspdk_vmd.a 00:03:25.708 SO libspdk_vmd.so.6.0 00:03:25.708 SYMLINK libspdk_idxd.so 00:03:25.708 SYMLINK libspdk_vmd.so 00:03:25.708 CC lib/jsonrpc/jsonrpc_client.o 00:03:25.708 CC lib/jsonrpc/jsonrpc_server.o 00:03:25.708 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:25.708 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.708 LIB libspdk_jsonrpc.a 00:03:25.708 LIB libspdk_env_dpdk.a 00:03:25.708 SO libspdk_jsonrpc.so.6.0 00:03:25.708 SO libspdk_env_dpdk.so.15.0 00:03:25.708 SYMLINK libspdk_jsonrpc.so 00:03:25.708 SYMLINK libspdk_env_dpdk.so 00:03:25.708 CC lib/rpc/rpc.o 00:03:25.708 LIB libspdk_rpc.a 00:03:25.708 SO libspdk_rpc.so.6.0 00:03:25.708 SYMLINK libspdk_rpc.so 00:03:25.708 CC lib/notify/notify.o 00:03:25.708 CC lib/notify/notify_rpc.o 00:03:25.708 CC lib/trace/trace.o 00:03:25.708 CC lib/trace/trace_flags.o 00:03:25.708 CC lib/keyring/keyring.o 00:03:25.708 CC lib/trace/trace_rpc.o 00:03:25.708 CC lib/keyring/keyring_rpc.o 00:03:25.708 LIB libspdk_notify.a 00:03:25.708 SO libspdk_notify.so.6.0 00:03:25.708 SYMLINK libspdk_notify.so 00:03:25.708 LIB libspdk_keyring.a 00:03:25.708 LIB libspdk_trace.a 00:03:25.708 SO libspdk_keyring.so.2.0 00:03:25.708 SO libspdk_trace.so.11.0 00:03:25.967 SYMLINK libspdk_keyring.so 00:03:25.967 SYMLINK libspdk_trace.so 00:03:26.226 CC lib/thread/thread.o 00:03:26.226 CC lib/thread/iobuf.o 00:03:26.226 CC lib/sock/sock.o 00:03:26.226 CC lib/sock/sock_rpc.o 00:03:26.485 LIB libspdk_sock.a 00:03:26.742 SO libspdk_sock.so.10.0 00:03:26.742 SYMLINK libspdk_sock.so 00:03:27.001 CC lib/nvme/nvme_fabric.o 00:03:27.001 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:27.001 CC lib/nvme/nvme_ctrlr.o 00:03:27.001 CC lib/nvme/nvme_ns_cmd.o 00:03:27.001 CC lib/nvme/nvme_pcie.o 00:03:27.001 CC lib/nvme/nvme_ns.o 00:03:27.001 CC lib/nvme/nvme_pcie_common.o 00:03:27.001 CC lib/nvme/nvme_qpair.o 00:03:27.001 CC lib/nvme/nvme.o 00:03:27.001 CC lib/nvme/nvme_quirks.o 00:03:27.001 CC lib/nvme/nvme_transport.o 00:03:27.001 CC lib/nvme/nvme_discovery.o 00:03:27.001 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:27.001 CC lib/nvme/nvme_opal.o 00:03:27.001 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:27.001 CC lib/nvme/nvme_tcp.o 00:03:27.001 CC lib/nvme/nvme_io_msg.o 00:03:27.001 CC lib/nvme/nvme_poll_group.o 00:03:27.001 CC lib/nvme/nvme_auth.o 00:03:27.001 CC lib/nvme/nvme_zns.o 00:03:27.001 CC lib/nvme/nvme_stubs.o 00:03:27.001 CC lib/nvme/nvme_cuse.o 00:03:27.001 CC lib/nvme/nvme_rdma.o 00:03:27.259 LIB libspdk_thread.a 00:03:27.259 SO libspdk_thread.so.10.1 00:03:27.259 SYMLINK libspdk_thread.so 00:03:27.826 CC lib/blob/blobstore.o 00:03:27.826 CC lib/blob/request.o 00:03:27.826 CC lib/blob/zeroes.o 00:03:27.826 CC lib/blob/blob_bs_dev.o 00:03:27.826 CC lib/fsdev/fsdev_io.o 00:03:27.826 CC lib/virtio/virtio.o 00:03:27.826 CC lib/fsdev/fsdev.o 00:03:27.826 CC lib/virtio/virtio_vfio_user.o 00:03:27.826 CC lib/virtio/virtio_vhost_user.o 00:03:27.826 CC lib/fsdev/fsdev_rpc.o 00:03:27.826 CC lib/virtio/virtio_pci.o 00:03:27.826 CC lib/accel/accel.o 00:03:27.826 CC lib/accel/accel_rpc.o 00:03:27.826 CC lib/accel/accel_sw.o 00:03:27.826 CC lib/init/json_config.o 00:03:27.826 CC lib/init/subsystem_rpc.o 00:03:27.827 CC lib/init/subsystem.o 00:03:27.827 CC lib/init/rpc.o 00:03:27.827 LIB libspdk_init.a 00:03:28.085 SO libspdk_init.so.6.0 00:03:28.085 LIB libspdk_virtio.a 00:03:28.085 SO libspdk_virtio.so.7.0 00:03:28.085 SYMLINK libspdk_init.so 00:03:28.085 SYMLINK libspdk_virtio.so 00:03:28.085 LIB libspdk_fsdev.a 00:03:28.344 SO libspdk_fsdev.so.1.0 00:03:28.344 
SYMLINK libspdk_fsdev.so 00:03:28.344 CC lib/event/app.o 00:03:28.344 CC lib/event/reactor.o 00:03:28.344 CC lib/event/log_rpc.o 00:03:28.344 CC lib/event/app_rpc.o 00:03:28.344 CC lib/event/scheduler_static.o 00:03:28.602 LIB libspdk_accel.a 00:03:28.602 SO libspdk_accel.so.16.0 00:03:28.602 SYMLINK libspdk_accel.so 00:03:28.602 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:28.602 LIB libspdk_nvme.a 00:03:28.602 LIB libspdk_event.a 00:03:28.602 SO libspdk_event.so.14.0 00:03:28.859 SO libspdk_nvme.so.14.0 00:03:28.859 SYMLINK libspdk_event.so 00:03:28.859 CC lib/bdev/bdev.o 00:03:28.859 CC lib/bdev/bdev_rpc.o 00:03:28.859 CC lib/bdev/bdev_zone.o 00:03:28.859 CC lib/bdev/part.o 00:03:28.859 CC lib/bdev/scsi_nvme.o 00:03:28.859 SYMLINK libspdk_nvme.so 00:03:29.117 LIB libspdk_fuse_dispatcher.a 00:03:29.118 SO libspdk_fuse_dispatcher.so.1.0 00:03:29.118 SYMLINK libspdk_fuse_dispatcher.so 00:03:29.684 LIB libspdk_blob.a 00:03:29.943 SO libspdk_blob.so.11.0 00:03:29.943 SYMLINK libspdk_blob.so 00:03:30.201 CC lib/lvol/lvol.o 00:03:30.201 CC lib/blobfs/blobfs.o 00:03:30.201 CC lib/blobfs/tree.o 00:03:30.767 LIB libspdk_bdev.a 00:03:30.767 SO libspdk_bdev.so.16.0 00:03:30.768 LIB libspdk_blobfs.a 00:03:30.768 SYMLINK libspdk_bdev.so 00:03:30.768 SO libspdk_blobfs.so.10.0 00:03:31.026 LIB libspdk_lvol.a 00:03:31.026 SYMLINK libspdk_blobfs.so 00:03:31.026 SO libspdk_lvol.so.10.0 00:03:31.026 SYMLINK libspdk_lvol.so 00:03:31.286 CC lib/scsi/dev.o 00:03:31.286 CC lib/scsi/scsi.o 00:03:31.286 CC lib/scsi/lun.o 00:03:31.286 CC lib/scsi/port.o 00:03:31.286 CC lib/scsi/scsi_rpc.o 00:03:31.286 CC lib/scsi/scsi_bdev.o 00:03:31.286 CC lib/scsi/scsi_pr.o 00:03:31.286 CC lib/scsi/task.o 00:03:31.286 CC lib/ublk/ublk.o 00:03:31.286 CC lib/ublk/ublk_rpc.o 00:03:31.286 CC lib/nbd/nbd_rpc.o 00:03:31.286 CC lib/nbd/nbd.o 00:03:31.286 CC lib/ftl/ftl_init.o 00:03:31.286 CC lib/ftl/ftl_core.o 00:03:31.286 CC lib/ftl/ftl_layout.o 00:03:31.286 CC lib/ftl/ftl_debug.o 00:03:31.286 CC lib/ftl/ftl_io.o 00:03:31.286 CC lib/ftl/ftl_l2p_flat.o 00:03:31.286 CC lib/ftl/ftl_sb.o 00:03:31.286 CC lib/ftl/ftl_l2p.o 00:03:31.286 CC lib/ftl/ftl_nv_cache.o 00:03:31.286 CC lib/ftl/ftl_writer.o 00:03:31.286 CC lib/ftl/ftl_rq.o 00:03:31.286 CC lib/ftl/ftl_band.o 00:03:31.286 CC lib/ftl/ftl_band_ops.o 00:03:31.286 CC lib/nvmf/ctrlr.o 00:03:31.286 CC lib/ftl/ftl_reloc.o 00:03:31.286 CC lib/nvmf/ctrlr_discovery.o 00:03:31.286 CC lib/ftl/ftl_l2p_cache.o 00:03:31.286 CC lib/nvmf/nvmf.o 00:03:31.286 CC lib/ftl/ftl_p2l.o 00:03:31.286 CC lib/nvmf/ctrlr_bdev.o 00:03:31.286 CC lib/nvmf/subsystem.o 00:03:31.286 CC lib/ftl/ftl_p2l_log.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt.o 00:03:31.286 CC lib/nvmf/nvmf_rpc.o 00:03:31.286 CC lib/nvmf/transport.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:31.286 CC lib/nvmf/tcp.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:31.286 CC lib/nvmf/mdns_server.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.286 CC lib/nvmf/stubs.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:31.286 CC lib/nvmf/rdma.o 00:03:31.286 CC lib/nvmf/auth.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:31.286 CC lib/ftl/utils/ftl_conf.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:31.286 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:31.286 CC lib/ftl/utils/ftl_mempool.o 
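[editor's note] In the make output above, the SO and SYMLINK lines are SPDK's build producing each versioned shared object (for example libspdk_log.so.7.0) and linking the unversioned name beside it. The embedded SONAME can be inspected directly; the path below assumes the default build/lib output directory in the SPDK source tree:

# Illustrative check, run after `make` completes:
readelf -d build/lib/libspdk_log.so | grep SONAME   # expect libspdk_log.so.7 or similar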
00:03:31.286 CC lib/ftl/utils/ftl_bitmap.o 00:03:31.286 CC lib/ftl/utils/ftl_md.o 00:03:31.286 CC lib/ftl/utils/ftl_property.o 00:03:31.286 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:31.286 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:31.286 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:31.286 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:31.286 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:31.286 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:31.286 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:31.286 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:31.286 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:31.286 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:31.286 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:31.286 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:31.286 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:31.286 CC lib/ftl/base/ftl_base_dev.o 00:03:31.286 CC lib/ftl/base/ftl_base_bdev.o 00:03:31.286 CC lib/ftl/ftl_trace.o 00:03:31.852 LIB libspdk_nbd.a 00:03:31.852 SO libspdk_nbd.so.7.0 00:03:31.852 LIB libspdk_scsi.a 00:03:31.852 SYMLINK libspdk_nbd.so 00:03:31.852 SO libspdk_scsi.so.9.0 00:03:31.852 SYMLINK libspdk_scsi.so 00:03:31.852 LIB libspdk_ublk.a 00:03:31.852 SO libspdk_ublk.so.3.0 00:03:32.110 SYMLINK libspdk_ublk.so 00:03:32.110 CC lib/iscsi/conn.o 00:03:32.110 CC lib/vhost/vhost.o 00:03:32.110 CC lib/vhost/vhost_blk.o 00:03:32.110 CC lib/vhost/vhost_rpc.o 00:03:32.110 CC lib/iscsi/init_grp.o 00:03:32.110 CC lib/iscsi/iscsi.o 00:03:32.110 CC lib/vhost/vhost_scsi.o 00:03:32.110 CC lib/vhost/rte_vhost_user.o 00:03:32.110 CC lib/iscsi/param.o 00:03:32.110 CC lib/iscsi/portal_grp.o 00:03:32.110 CC lib/iscsi/tgt_node.o 00:03:32.110 CC lib/iscsi/iscsi_subsystem.o 00:03:32.110 CC lib/iscsi/iscsi_rpc.o 00:03:32.110 CC lib/iscsi/task.o 00:03:32.429 LIB libspdk_ftl.a 00:03:32.429 SO libspdk_ftl.so.9.0 00:03:32.688 SYMLINK libspdk_ftl.so 00:03:32.947 LIB libspdk_vhost.a 00:03:32.947 LIB libspdk_nvmf.a 00:03:33.205 SO libspdk_vhost.so.8.0 00:03:33.205 SO libspdk_nvmf.so.19.0 00:03:33.205 SYMLINK libspdk_vhost.so 00:03:33.205 LIB libspdk_iscsi.a 00:03:33.205 SYMLINK libspdk_nvmf.so 00:03:33.206 SO libspdk_iscsi.so.8.0 00:03:33.465 SYMLINK libspdk_iscsi.so 00:03:34.032 CC module/env_dpdk/env_dpdk_rpc.o 00:03:34.032 LIB libspdk_env_dpdk_rpc.a 00:03:34.032 CC module/accel/dsa/accel_dsa.o 00:03:34.032 CC module/accel/dsa/accel_dsa_rpc.o 00:03:34.032 CC module/sock/posix/posix.o 00:03:34.032 CC module/keyring/linux/keyring.o 00:03:34.032 CC module/keyring/linux/keyring_rpc.o 00:03:34.032 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:34.032 CC module/fsdev/aio/fsdev_aio.o 00:03:34.032 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:34.032 CC module/fsdev/aio/linux_aio_mgr.o 00:03:34.032 CC module/accel/iaa/accel_iaa.o 00:03:34.032 CC module/accel/iaa/accel_iaa_rpc.o 00:03:34.032 CC module/accel/ioat/accel_ioat.o 00:03:34.032 CC module/accel/ioat/accel_ioat_rpc.o 00:03:34.032 CC module/blob/bdev/blob_bdev.o 00:03:34.032 CC module/accel/error/accel_error.o 00:03:34.032 CC module/keyring/file/keyring.o 00:03:34.032 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:34.032 CC module/accel/error/accel_error_rpc.o 00:03:34.032 CC module/keyring/file/keyring_rpc.o 00:03:34.032 CC module/scheduler/gscheduler/gscheduler.o 00:03:34.032 SO libspdk_env_dpdk_rpc.so.6.0 00:03:34.291 SYMLINK libspdk_env_dpdk_rpc.so 00:03:34.291 LIB libspdk_keyring_linux.a 00:03:34.291 LIB libspdk_scheduler_dpdk_governor.a 00:03:34.291 LIB libspdk_scheduler_gscheduler.a 00:03:34.291 LIB libspdk_keyring_file.a 00:03:34.291 LIB libspdk_accel_error.a 00:03:34.291 LIB 
libspdk_accel_ioat.a 00:03:34.291 LIB libspdk_scheduler_dynamic.a 00:03:34.291 LIB libspdk_accel_iaa.a 00:03:34.291 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:34.291 SO libspdk_keyring_linux.so.1.0 00:03:34.291 SO libspdk_keyring_file.so.2.0 00:03:34.291 SO libspdk_scheduler_gscheduler.so.4.0 00:03:34.291 SO libspdk_scheduler_dynamic.so.4.0 00:03:34.291 SO libspdk_accel_error.so.2.0 00:03:34.291 SO libspdk_accel_ioat.so.6.0 00:03:34.291 SO libspdk_accel_iaa.so.3.0 00:03:34.291 LIB libspdk_accel_dsa.a 00:03:34.291 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:34.291 SYMLINK libspdk_scheduler_gscheduler.so 00:03:34.291 SYMLINK libspdk_keyring_linux.so 00:03:34.291 LIB libspdk_blob_bdev.a 00:03:34.291 SYMLINK libspdk_keyring_file.so 00:03:34.291 SYMLINK libspdk_scheduler_dynamic.so 00:03:34.291 SYMLINK libspdk_accel_error.so 00:03:34.291 SO libspdk_accel_dsa.so.5.0 00:03:34.291 SYMLINK libspdk_accel_ioat.so 00:03:34.291 SO libspdk_blob_bdev.so.11.0 00:03:34.291 SYMLINK libspdk_accel_iaa.so 00:03:34.550 SYMLINK libspdk_accel_dsa.so 00:03:34.550 SYMLINK libspdk_blob_bdev.so 00:03:34.550 LIB libspdk_fsdev_aio.a 00:03:34.550 SO libspdk_fsdev_aio.so.1.0 00:03:34.808 SYMLINK libspdk_fsdev_aio.so 00:03:34.808 LIB libspdk_sock_posix.a 00:03:34.809 SO libspdk_sock_posix.so.6.0 00:03:34.809 SYMLINK libspdk_sock_posix.so 00:03:35.067 CC module/bdev/nvme/bdev_nvme.o 00:03:35.067 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:35.067 CC module/blobfs/bdev/blobfs_bdev.o 00:03:35.067 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:35.067 CC module/bdev/nvme/nvme_rpc.o 00:03:35.067 CC module/bdev/nvme/bdev_mdns_client.o 00:03:35.067 CC module/bdev/nvme/vbdev_opal.o 00:03:35.067 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:35.067 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:35.067 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:35.067 CC module/bdev/error/vbdev_error.o 00:03:35.067 CC module/bdev/malloc/bdev_malloc.o 00:03:35.067 CC module/bdev/passthru/vbdev_passthru.o 00:03:35.067 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:35.067 CC module/bdev/error/vbdev_error_rpc.o 00:03:35.067 CC module/bdev/delay/vbdev_delay.o 00:03:35.067 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:35.067 CC module/bdev/gpt/vbdev_gpt.o 00:03:35.067 CC module/bdev/gpt/gpt.o 00:03:35.067 CC module/bdev/lvol/vbdev_lvol.o 00:03:35.067 CC module/bdev/split/vbdev_split.o 00:03:35.067 CC module/bdev/split/vbdev_split_rpc.o 00:03:35.067 CC module/bdev/ftl/bdev_ftl.o 00:03:35.067 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:35.067 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.067 CC module/bdev/iscsi/bdev_iscsi.o 00:03:35.067 CC module/bdev/aio/bdev_aio.o 00:03:35.067 CC module/bdev/aio/bdev_aio_rpc.o 00:03:35.067 CC module/bdev/raid/bdev_raid_rpc.o 00:03:35.067 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:35.067 CC module/bdev/raid/bdev_raid.o 00:03:35.067 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.067 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:35.067 CC module/bdev/raid/raid0.o 00:03:35.067 CC module/bdev/raid/raid1.o 00:03:35.067 CC module/bdev/raid/bdev_raid_sb.o 00:03:35.067 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:35.067 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:35.067 CC module/bdev/raid/concat.o 00:03:35.067 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:35.067 CC module/bdev/null/bdev_null.o 00:03:35.067 CC module/bdev/null/bdev_null_rpc.o 00:03:35.326 LIB libspdk_blobfs_bdev.a 00:03:35.326 SO libspdk_blobfs_bdev.so.6.0 00:03:35.326 LIB libspdk_bdev_split.a 00:03:35.326 LIB libspdk_bdev_error.a 
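The LIB/SO/SYMLINK triplets above are the build's per-component verification that a static archive, a versioned shared object, and an unversioned developer symlink all exist before linking continues. A minimal sketch of that kind of check for one component, assuming the artifacts land in build/lib; the libspdk_nbd name and the .so.7.0 version are taken from the log lines above, not from the build scripts themselves:

    lib=build/lib/libspdk_nbd
    [ -f "$lib.a" ]  || echo "missing static archive $lib.a"
    [ -L "$lib.so" ] || echo "missing dev symlink $lib.so"
    # print the soname embedded in the versioned shared object, e.g. libspdk_nbd.so.7.0
    readelf -d "$lib.so.7.0" | awk '/SONAME/ {print $NF}'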
00:03:35.326 LIB libspdk_bdev_gpt.a 00:03:35.326 LIB libspdk_bdev_null.a 00:03:35.326 SO libspdk_bdev_split.so.6.0 00:03:35.326 SO libspdk_bdev_gpt.so.6.0 00:03:35.326 SYMLINK libspdk_blobfs_bdev.so 00:03:35.326 LIB libspdk_bdev_passthru.a 00:03:35.326 SO libspdk_bdev_error.so.6.0 00:03:35.326 LIB libspdk_bdev_ftl.a 00:03:35.326 SO libspdk_bdev_null.so.6.0 00:03:35.326 LIB libspdk_bdev_aio.a 00:03:35.326 SO libspdk_bdev_passthru.so.6.0 00:03:35.326 SO libspdk_bdev_ftl.so.6.0 00:03:35.326 SYMLINK libspdk_bdev_split.so 00:03:35.326 SYMLINK libspdk_bdev_gpt.so 00:03:35.326 LIB libspdk_bdev_malloc.a 00:03:35.326 LIB libspdk_bdev_zone_block.a 00:03:35.326 SYMLINK libspdk_bdev_error.so 00:03:35.326 SO libspdk_bdev_aio.so.6.0 00:03:35.326 LIB libspdk_bdev_delay.a 00:03:35.326 LIB libspdk_bdev_iscsi.a 00:03:35.326 SYMLINK libspdk_bdev_null.so 00:03:35.326 SO libspdk_bdev_zone_block.so.6.0 00:03:35.326 SO libspdk_bdev_malloc.so.6.0 00:03:35.326 SYMLINK libspdk_bdev_passthru.so 00:03:35.326 SYMLINK libspdk_bdev_ftl.so 00:03:35.326 SO libspdk_bdev_delay.so.6.0 00:03:35.326 SO libspdk_bdev_iscsi.so.6.0 00:03:35.326 SYMLINK libspdk_bdev_aio.so 00:03:35.584 SYMLINK libspdk_bdev_zone_block.so 00:03:35.584 LIB libspdk_bdev_lvol.a 00:03:35.584 SYMLINK libspdk_bdev_malloc.so 00:03:35.584 SYMLINK libspdk_bdev_delay.so 00:03:35.584 LIB libspdk_bdev_virtio.a 00:03:35.584 SYMLINK libspdk_bdev_iscsi.so 00:03:35.584 SO libspdk_bdev_lvol.so.6.0 00:03:35.584 SO libspdk_bdev_virtio.so.6.0 00:03:35.584 SYMLINK libspdk_bdev_lvol.so 00:03:35.584 SYMLINK libspdk_bdev_virtio.so 00:03:35.843 LIB libspdk_bdev_raid.a 00:03:35.843 SO libspdk_bdev_raid.so.6.0 00:03:35.843 SYMLINK libspdk_bdev_raid.so 00:03:36.779 LIB libspdk_bdev_nvme.a 00:03:36.779 SO libspdk_bdev_nvme.so.7.0 00:03:36.779 SYMLINK libspdk_bdev_nvme.so 00:03:37.716 CC module/event/subsystems/iobuf/iobuf.o 00:03:37.716 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:37.716 CC module/event/subsystems/sock/sock.o 00:03:37.716 CC module/event/subsystems/keyring/keyring.o 00:03:37.716 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:37.716 CC module/event/subsystems/scheduler/scheduler.o 00:03:37.716 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:37.716 CC module/event/subsystems/vmd/vmd.o 00:03:37.716 CC module/event/subsystems/fsdev/fsdev.o 00:03:37.716 LIB libspdk_event_iobuf.a 00:03:37.716 LIB libspdk_event_sock.a 00:03:37.716 LIB libspdk_event_keyring.a 00:03:37.716 SO libspdk_event_iobuf.so.3.0 00:03:37.716 LIB libspdk_event_fsdev.a 00:03:37.716 SO libspdk_event_sock.so.5.0 00:03:37.716 LIB libspdk_event_scheduler.a 00:03:37.716 LIB libspdk_event_vhost_blk.a 00:03:37.716 LIB libspdk_event_vmd.a 00:03:37.716 SO libspdk_event_keyring.so.1.0 00:03:37.716 SO libspdk_event_scheduler.so.4.0 00:03:37.716 SO libspdk_event_fsdev.so.1.0 00:03:37.716 SO libspdk_event_vhost_blk.so.3.0 00:03:37.716 SYMLINK libspdk_event_iobuf.so 00:03:37.716 SO libspdk_event_vmd.so.6.0 00:03:37.716 SYMLINK libspdk_event_sock.so 00:03:37.716 SYMLINK libspdk_event_scheduler.so 00:03:37.716 SYMLINK libspdk_event_fsdev.so 00:03:37.716 SYMLINK libspdk_event_keyring.so 00:03:37.716 SYMLINK libspdk_event_vmd.so 00:03:37.716 SYMLINK libspdk_event_vhost_blk.so 00:03:37.975 CC module/event/subsystems/accel/accel.o 00:03:38.234 LIB libspdk_event_accel.a 00:03:38.234 SO libspdk_event_accel.so.6.0 00:03:38.234 SYMLINK libspdk_event_accel.so 00:03:38.803 CC module/event/subsystems/bdev/bdev.o 00:03:38.803 LIB libspdk_event_bdev.a 00:03:38.803 SO libspdk_event_bdev.so.6.0 00:03:38.803 SYMLINK 
libspdk_event_bdev.so 00:03:39.370 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:39.370 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:39.370 CC module/event/subsystems/ublk/ublk.o 00:03:39.370 CC module/event/subsystems/nbd/nbd.o 00:03:39.370 CC module/event/subsystems/scsi/scsi.o 00:03:39.370 LIB libspdk_event_nbd.a 00:03:39.370 LIB libspdk_event_ublk.a 00:03:39.370 SO libspdk_event_ublk.so.3.0 00:03:39.370 LIB libspdk_event_scsi.a 00:03:39.370 SO libspdk_event_nbd.so.6.0 00:03:39.370 LIB libspdk_event_nvmf.a 00:03:39.370 SO libspdk_event_scsi.so.6.0 00:03:39.370 SYMLINK libspdk_event_ublk.so 00:03:39.370 SYMLINK libspdk_event_nbd.so 00:03:39.370 SO libspdk_event_nvmf.so.6.0 00:03:39.629 SYMLINK libspdk_event_scsi.so 00:03:39.629 SYMLINK libspdk_event_nvmf.so 00:03:39.888 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:39.888 CC module/event/subsystems/iscsi/iscsi.o 00:03:39.888 LIB libspdk_event_vhost_scsi.a 00:03:39.888 SO libspdk_event_vhost_scsi.so.3.0 00:03:40.148 LIB libspdk_event_iscsi.a 00:03:40.148 SYMLINK libspdk_event_vhost_scsi.so 00:03:40.148 SO libspdk_event_iscsi.so.6.0 00:03:40.148 SYMLINK libspdk_event_iscsi.so 00:03:40.406 SO libspdk.so.6.0 00:03:40.406 SYMLINK libspdk.so 00:03:40.669 TEST_HEADER include/spdk/accel.h 00:03:40.669 TEST_HEADER include/spdk/assert.h 00:03:40.669 TEST_HEADER include/spdk/accel_module.h 00:03:40.669 CC test/rpc_client/rpc_client_test.o 00:03:40.669 TEST_HEADER include/spdk/barrier.h 00:03:40.669 TEST_HEADER include/spdk/bdev_module.h 00:03:40.669 TEST_HEADER include/spdk/base64.h 00:03:40.669 TEST_HEADER include/spdk/bdev.h 00:03:40.669 TEST_HEADER include/spdk/bit_pool.h 00:03:40.669 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.669 TEST_HEADER include/spdk/bit_array.h 00:03:40.669 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.669 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.669 TEST_HEADER include/spdk/conf.h 00:03:40.669 TEST_HEADER include/spdk/config.h 00:03:40.669 TEST_HEADER include/spdk/blobfs.h 00:03:40.669 TEST_HEADER include/spdk/blob.h 00:03:40.669 TEST_HEADER include/spdk/crc16.h 00:03:40.669 TEST_HEADER include/spdk/crc32.h 00:03:40.669 TEST_HEADER include/spdk/cpuset.h 00:03:40.669 TEST_HEADER include/spdk/dif.h 00:03:40.669 TEST_HEADER include/spdk/crc64.h 00:03:40.669 TEST_HEADER include/spdk/dma.h 00:03:40.669 TEST_HEADER include/spdk/env_dpdk.h 00:03:40.669 TEST_HEADER include/spdk/endian.h 00:03:40.669 TEST_HEADER include/spdk/env.h 00:03:40.669 TEST_HEADER include/spdk/fd_group.h 00:03:40.669 TEST_HEADER include/spdk/event.h 00:03:40.669 TEST_HEADER include/spdk/file.h 00:03:40.669 TEST_HEADER include/spdk/fd.h 00:03:40.669 TEST_HEADER include/spdk/fsdev_module.h 00:03:40.669 TEST_HEADER include/spdk/fsdev.h 00:03:40.669 TEST_HEADER include/spdk/ftl.h 00:03:40.669 CC app/spdk_lspci/spdk_lspci.o 00:03:40.669 TEST_HEADER include/spdk/hexlify.h 00:03:40.669 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:40.669 CC app/trace_record/trace_record.o 00:03:40.669 TEST_HEADER include/spdk/gpt_spec.h 00:03:40.669 TEST_HEADER include/spdk/histogram_data.h 00:03:40.669 TEST_HEADER include/spdk/idxd.h 00:03:40.669 TEST_HEADER include/spdk/ioat.h 00:03:40.669 TEST_HEADER include/spdk/idxd_spec.h 00:03:40.669 TEST_HEADER include/spdk/init.h 00:03:40.669 TEST_HEADER include/spdk/ioat_spec.h 00:03:40.669 TEST_HEADER include/spdk/json.h 00:03:40.669 TEST_HEADER include/spdk/iscsi_spec.h 00:03:40.669 TEST_HEADER include/spdk/jsonrpc.h 00:03:40.669 TEST_HEADER include/spdk/keyring.h 00:03:40.669 TEST_HEADER 
include/spdk/likely.h 00:03:40.669 TEST_HEADER include/spdk/keyring_module.h 00:03:40.669 TEST_HEADER include/spdk/log.h 00:03:40.669 TEST_HEADER include/spdk/lvol.h 00:03:40.669 TEST_HEADER include/spdk/md5.h 00:03:40.669 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:40.669 TEST_HEADER include/spdk/memory.h 00:03:40.669 TEST_HEADER include/spdk/mmio.h 00:03:40.669 CC app/spdk_nvme_identify/identify.o 00:03:40.669 CC app/spdk_nvme_perf/perf.o 00:03:40.669 TEST_HEADER include/spdk/net.h 00:03:40.669 TEST_HEADER include/spdk/nbd.h 00:03:40.669 TEST_HEADER include/spdk/nvme.h 00:03:40.669 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.669 TEST_HEADER include/spdk/notify.h 00:03:40.669 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:40.669 TEST_HEADER include/spdk/nvme_intel.h 00:03:40.669 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:40.669 TEST_HEADER include/spdk/nvme_spec.h 00:03:40.669 TEST_HEADER include/spdk/nvme_zns.h 00:03:40.669 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:40.669 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:40.669 CC app/spdk_top/spdk_top.o 00:03:40.669 TEST_HEADER include/spdk/nvmf.h 00:03:40.669 TEST_HEADER include/spdk/nvmf_spec.h 00:03:40.669 CXX app/trace/trace.o 00:03:40.669 TEST_HEADER include/spdk/nvmf_transport.h 00:03:40.669 TEST_HEADER include/spdk/opal.h 00:03:40.669 TEST_HEADER include/spdk/pci_ids.h 00:03:40.669 TEST_HEADER include/spdk/opal_spec.h 00:03:40.669 TEST_HEADER include/spdk/reduce.h 00:03:40.669 TEST_HEADER include/spdk/pipe.h 00:03:40.669 TEST_HEADER include/spdk/rpc.h 00:03:40.669 TEST_HEADER include/spdk/scheduler.h 00:03:40.669 TEST_HEADER include/spdk/scsi.h 00:03:40.669 TEST_HEADER include/spdk/queue.h 00:03:40.669 TEST_HEADER include/spdk/sock.h 00:03:40.669 TEST_HEADER include/spdk/scsi_spec.h 00:03:40.669 TEST_HEADER include/spdk/stdinc.h 00:03:40.669 TEST_HEADER include/spdk/thread.h 00:03:40.669 TEST_HEADER include/spdk/trace.h 00:03:40.669 TEST_HEADER include/spdk/string.h 00:03:40.669 TEST_HEADER include/spdk/trace_parser.h 00:03:40.669 TEST_HEADER include/spdk/util.h 00:03:40.669 TEST_HEADER include/spdk/tree.h 00:03:40.669 TEST_HEADER include/spdk/ublk.h 00:03:40.669 TEST_HEADER include/spdk/version.h 00:03:40.669 TEST_HEADER include/spdk/uuid.h 00:03:40.669 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:40.669 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.669 TEST_HEADER include/spdk/vhost.h 00:03:40.669 TEST_HEADER include/spdk/vmd.h 00:03:40.669 TEST_HEADER include/spdk/xor.h 00:03:40.669 CXX test/cpp_headers/accel_module.o 00:03:40.669 TEST_HEADER include/spdk/zipf.h 00:03:40.669 CXX test/cpp_headers/accel.o 00:03:40.669 CXX test/cpp_headers/assert.o 00:03:40.669 CXX test/cpp_headers/barrier.o 00:03:40.669 CXX test/cpp_headers/bdev_module.o 00:03:40.669 CXX test/cpp_headers/bdev.o 00:03:40.669 CXX test/cpp_headers/bdev_zone.o 00:03:40.669 CXX test/cpp_headers/base64.o 00:03:40.669 CXX test/cpp_headers/bit_array.o 00:03:40.669 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.669 CXX test/cpp_headers/blob.o 00:03:40.669 CXX test/cpp_headers/bit_pool.o 00:03:40.669 CXX test/cpp_headers/conf.o 00:03:40.669 CC app/spdk_tgt/spdk_tgt.o 00:03:40.669 CXX test/cpp_headers/blob_bdev.o 00:03:40.669 CXX test/cpp_headers/blobfs.o 00:03:40.669 CXX test/cpp_headers/crc16.o 00:03:40.669 CXX test/cpp_headers/cpuset.o 00:03:40.669 CXX test/cpp_headers/dif.o 00:03:40.669 CXX test/cpp_headers/crc64.o 00:03:40.669 CXX test/cpp_headers/config.o 00:03:40.669 CC app/spdk_dd/spdk_dd.o 00:03:40.669 CXX test/cpp_headers/dma.o 00:03:40.669 CXX 
test/cpp_headers/crc32.o 00:03:40.669 CXX test/cpp_headers/endian.o 00:03:40.669 CXX test/cpp_headers/env.o 00:03:40.669 CXX test/cpp_headers/env_dpdk.o 00:03:40.669 CXX test/cpp_headers/event.o 00:03:40.669 CXX test/cpp_headers/fd_group.o 00:03:40.669 CXX test/cpp_headers/file.o 00:03:40.669 CC app/iscsi_tgt/iscsi_tgt.o 00:03:40.669 CXX test/cpp_headers/fd.o 00:03:40.669 CXX test/cpp_headers/fsdev.o 00:03:40.669 CC app/nvmf_tgt/nvmf_main.o 00:03:40.669 CXX test/cpp_headers/ftl.o 00:03:40.669 CXX test/cpp_headers/fuse_dispatcher.o 00:03:40.669 CXX test/cpp_headers/fsdev_module.o 00:03:40.669 CXX test/cpp_headers/gpt_spec.o 00:03:40.669 CXX test/cpp_headers/histogram_data.o 00:03:40.669 CXX test/cpp_headers/hexlify.o 00:03:40.669 CXX test/cpp_headers/idxd.o 00:03:40.669 CXX test/cpp_headers/idxd_spec.o 00:03:40.669 CXX test/cpp_headers/init.o 00:03:40.669 CXX test/cpp_headers/ioat.o 00:03:40.669 CXX test/cpp_headers/ioat_spec.o 00:03:40.669 CXX test/cpp_headers/json.o 00:03:40.669 CXX test/cpp_headers/iscsi_spec.o 00:03:40.669 CXX test/cpp_headers/keyring.o 00:03:40.669 CXX test/cpp_headers/jsonrpc.o 00:03:40.669 CXX test/cpp_headers/keyring_module.o 00:03:40.670 CXX test/cpp_headers/log.o 00:03:40.670 CXX test/cpp_headers/likely.o 00:03:40.670 CXX test/cpp_headers/md5.o 00:03:40.670 CXX test/cpp_headers/lvol.o 00:03:40.670 CXX test/cpp_headers/memory.o 00:03:40.670 CXX test/cpp_headers/mmio.o 00:03:40.670 CXX test/cpp_headers/notify.o 00:03:40.670 CXX test/cpp_headers/nbd.o 00:03:40.670 CXX test/cpp_headers/net.o 00:03:40.670 CXX test/cpp_headers/nvme_intel.o 00:03:40.670 CXX test/cpp_headers/nvme.o 00:03:40.942 CXX test/cpp_headers/nvme_ocssd.o 00:03:40.942 CXX test/cpp_headers/nvme_spec.o 00:03:40.942 CXX test/cpp_headers/nvme_zns.o 00:03:40.942 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:40.942 CXX test/cpp_headers/nvmf_cmd.o 00:03:40.942 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:40.942 CXX test/cpp_headers/nvmf.o 00:03:40.942 CXX test/cpp_headers/nvmf_spec.o 00:03:40.942 CXX test/cpp_headers/nvmf_transport.o 00:03:40.942 CXX test/cpp_headers/opal.o 00:03:40.942 CXX test/cpp_headers/opal_spec.o 00:03:40.942 CXX test/cpp_headers/pci_ids.o 00:03:40.942 CXX test/cpp_headers/pipe.o 00:03:40.942 CXX test/cpp_headers/queue.o 00:03:40.942 CXX test/cpp_headers/rpc.o 00:03:40.942 CXX test/cpp_headers/reduce.o 00:03:40.942 CXX test/cpp_headers/scsi.o 00:03:40.942 CXX test/cpp_headers/scheduler.o 00:03:40.942 CXX test/cpp_headers/scsi_spec.o 00:03:40.942 CXX test/cpp_headers/sock.o 00:03:40.942 CXX test/cpp_headers/stdinc.o 00:03:40.942 CXX test/cpp_headers/string.o 00:03:40.942 CXX test/cpp_headers/thread.o 00:03:40.942 CXX test/cpp_headers/trace.o 00:03:40.942 CXX test/cpp_headers/trace_parser.o 00:03:40.942 CXX test/cpp_headers/tree.o 00:03:40.942 CXX test/cpp_headers/ublk.o 00:03:40.942 CC examples/ioat/verify/verify.o 00:03:40.942 CC test/app/jsoncat/jsoncat.o 00:03:40.942 CXX test/cpp_headers/util.o 00:03:40.942 CC test/env/pci/pci_ut.o 00:03:40.942 CC test/thread/poller_perf/poller_perf.o 00:03:40.942 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.942 CC examples/ioat/perf/perf.o 00:03:40.942 CC test/app/histogram_perf/histogram_perf.o 00:03:40.942 CC test/env/vtophys/vtophys.o 00:03:40.942 CC test/dma/test_dma/test_dma.o 00:03:40.942 CC test/app/stub/stub.o 00:03:40.942 CC examples/util/zipf/zipf.o 00:03:40.942 CC test/env/memory/memory_ut.o 00:03:40.942 CC test/app/bdev_svc/bdev_svc.o 00:03:41.224 CC app/fio/nvme/fio_plugin.o 00:03:41.224 CC app/fio/bdev/fio_plugin.o 
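The app/fio/nvme/fio_plugin.o and app/fio/bdev/fio_plugin.o objects compiled above build into fio ioengine plugins. A hedged usage sketch for the NVMe plugin: the build/fio/spdk_nvme path is illustrative (it moves between SPDK versions), and the PCI address 0000:d8:00.0 exercised later in this run is written with dots because fio reserves ':' inside filenames:

    LD_PRELOAD=./build/fio/spdk_nvme fio --name=sanity --ioengine=spdk \
        --filename='trtype=PCIe traddr=0000.d8.00.0 ns=1' \
        --thread=1 --direct=1 --rw=randread --bs=4k --iodepth=32 \
        --runtime=10 --time_based=1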
00:03:41.224 LINK spdk_lspci 00:03:41.490 LINK rpc_client_test 00:03:41.490 LINK spdk_nvme_discover 00:03:41.490 LINK jsoncat 00:03:41.490 CXX test/cpp_headers/uuid.o 00:03:41.490 CC test/env/mem_callbacks/mem_callbacks.o 00:03:41.490 CXX test/cpp_headers/version.o 00:03:41.490 CXX test/cpp_headers/vfio_user_pci.o 00:03:41.490 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:41.490 CXX test/cpp_headers/vhost.o 00:03:41.490 CXX test/cpp_headers/vfio_user_spec.o 00:03:41.490 LINK nvmf_tgt 00:03:41.490 CXX test/cpp_headers/vmd.o 00:03:41.490 LINK vtophys 00:03:41.490 CXX test/cpp_headers/zipf.o 00:03:41.490 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:41.490 CXX test/cpp_headers/xor.o 00:03:41.748 LINK env_dpdk_post_init 00:03:41.748 LINK poller_perf 00:03:41.748 LINK interrupt_tgt 00:03:41.748 LINK zipf 00:03:41.748 LINK iscsi_tgt 00:03:41.748 LINK stub 00:03:41.748 LINK histogram_perf 00:03:41.748 LINK verify 00:03:41.748 LINK spdk_trace_record 00:03:41.748 LINK spdk_tgt 00:03:41.748 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:41.748 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:41.748 LINK bdev_svc 00:03:41.748 LINK ioat_perf 00:03:41.748 LINK spdk_dd 00:03:41.748 LINK pci_ut 00:03:42.006 LINK spdk_trace 00:03:42.006 LINK test_dma 00:03:42.006 LINK spdk_nvme 00:03:42.006 LINK vhost_fuzz 00:03:42.006 CC test/event/reactor_perf/reactor_perf.o 00:03:42.006 LINK nvme_fuzz 00:03:42.006 CC test/event/event_perf/event_perf.o 00:03:42.006 CC test/event/app_repeat/app_repeat.o 00:03:42.264 CC examples/sock/hello_world/hello_sock.o 00:03:42.264 LINK spdk_bdev 00:03:42.264 CC test/event/reactor/reactor.o 00:03:42.264 CC examples/idxd/perf/perf.o 00:03:42.264 CC examples/vmd/lsvmd/lsvmd.o 00:03:42.264 CC examples/vmd/led/led.o 00:03:42.264 CC test/event/scheduler/scheduler.o 00:03:42.264 LINK spdk_top 00:03:42.264 CC examples/thread/thread/thread_ex.o 00:03:42.264 LINK mem_callbacks 00:03:42.264 LINK spdk_nvme_perf 00:03:42.264 LINK spdk_nvme_identify 00:03:42.264 CC app/vhost/vhost.o 00:03:42.264 LINK event_perf 00:03:42.264 LINK reactor_perf 00:03:42.264 LINK lsvmd 00:03:42.264 LINK reactor 00:03:42.264 LINK led 00:03:42.264 LINK app_repeat 00:03:42.264 LINK hello_sock 00:03:42.522 LINK scheduler 00:03:42.522 LINK thread 00:03:42.522 CC test/nvme/e2edp/nvme_dp.o 00:03:42.522 CC test/nvme/overhead/overhead.o 00:03:42.522 CC test/nvme/startup/startup.o 00:03:42.522 LINK idxd_perf 00:03:42.522 CC test/nvme/boot_partition/boot_partition.o 00:03:42.522 CC test/nvme/err_injection/err_injection.o 00:03:42.522 CC test/nvme/cuse/cuse.o 00:03:42.522 CC test/nvme/reset/reset.o 00:03:42.522 CC test/nvme/reserve/reserve.o 00:03:42.522 CC test/nvme/compliance/nvme_compliance.o 00:03:42.522 CC test/nvme/sgl/sgl.o 00:03:42.522 CC test/nvme/fdp/fdp.o 00:03:42.522 CC test/nvme/simple_copy/simple_copy.o 00:03:42.522 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.522 CC test/nvme/connect_stress/connect_stress.o 00:03:42.522 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.522 LINK vhost 00:03:42.522 CC test/nvme/aer/aer.o 00:03:42.522 CC test/blobfs/mkfs/mkfs.o 00:03:42.522 CC test/accel/dif/dif.o 00:03:42.522 LINK memory_ut 00:03:42.522 LINK startup 00:03:42.522 CC test/lvol/esnap/esnap.o 00:03:42.781 LINK boot_partition 00:03:42.781 LINK err_injection 00:03:42.781 LINK doorbell_aers 00:03:42.781 LINK fused_ordering 00:03:42.781 LINK reserve 00:03:42.781 LINK connect_stress 00:03:42.781 LINK overhead 00:03:42.781 LINK mkfs 00:03:42.781 LINK aer 00:03:42.781 LINK simple_copy 00:03:42.781 LINK reset 
00:03:42.781 LINK nvme_dp 00:03:42.781 LINK sgl 00:03:42.781 LINK nvme_compliance 00:03:42.781 LINK fdp 00:03:42.781 CC examples/nvme/abort/abort.o 00:03:42.781 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.781 CC examples/nvme/hello_world/hello_world.o 00:03:42.781 CC examples/nvme/arbitration/arbitration.o 00:03:42.781 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:42.781 CC examples/nvme/hotplug/hotplug.o 00:03:42.781 CC examples/nvme/reconnect/reconnect.o 00:03:42.781 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:43.040 CC examples/accel/perf/accel_perf.o 00:03:43.040 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:43.040 CC examples/blob/cli/blobcli.o 00:03:43.040 CC examples/blob/hello_world/hello_blob.o 00:03:43.040 LINK iscsi_fuzz 00:03:43.040 LINK pmr_persistence 00:03:43.040 LINK cmb_copy 00:03:43.040 LINK dif 00:03:43.040 LINK hello_world 00:03:43.040 LINK hotplug 00:03:43.040 LINK abort 00:03:43.040 LINK arbitration 00:03:43.299 LINK reconnect 00:03:43.299 LINK hello_blob 00:03:43.299 LINK hello_fsdev 00:03:43.299 LINK nvme_manage 00:03:43.299 LINK accel_perf 00:03:43.299 LINK blobcli 00:03:43.558 LINK cuse 00:03:43.558 CC test/bdev/bdevio/bdevio.o 00:03:43.818 CC examples/bdev/bdevperf/bdevperf.o 00:03:43.818 CC examples/bdev/hello_world/hello_bdev.o 00:03:44.076 LINK bdevio 00:03:44.076 LINK hello_bdev 00:03:44.335 LINK bdevperf 00:03:44.903 CC examples/nvmf/nvmf/nvmf.o 00:03:45.162 LINK nvmf 00:03:46.098 LINK esnap 00:03:46.372 00:03:46.372 real 0m53.168s 00:03:46.372 user 6m10.340s 00:03:46.372 sys 3m4.208s 00:03:46.372 15:51:14 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:46.372 15:51:14 make -- common/autotest_common.sh@10 -- $ set +x 00:03:46.372 ************************************ 00:03:46.372 END TEST make 00:03:46.372 ************************************ 00:03:46.372 15:51:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.372 15:51:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:46.372 15:51:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:46.372 15:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.372 15:51:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.372 15:51:14 -- pm/common@44 -- $ pid=2524593 00:03:46.372 15:51:14 -- pm/common@50 -- $ kill -TERM 2524593 00:03:46.372 15:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.372 15:51:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.372 15:51:14 -- pm/common@44 -- $ pid=2524594 00:03:46.372 15:51:14 -- pm/common@50 -- $ kill -TERM 2524594 00:03:46.372 15:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.372 15:51:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:46.372 15:51:14 -- pm/common@44 -- $ pid=2524597 00:03:46.372 15:51:14 -- pm/common@50 -- $ kill -TERM 2524597 00:03:46.372 15:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.372 15:51:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:46.372 15:51:14 -- pm/common@44 -- $ pid=2524625 00:03:46.372 15:51:14 -- pm/common@50 -- $ sudo -E kill -TERM 2524625 00:03:46.632 15:51:15 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:46.632 15:51:15 -- common/autotest_common.sh@1681 -- # lcov 
--version 00:03:46.632 15:51:15 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:46.632 15:51:15 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:46.632 15:51:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.632 15:51:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.632 15:51:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.632 15:51:15 -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.632 15:51:15 -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.632 15:51:15 -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.632 15:51:15 -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.632 15:51:15 -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.632 15:51:15 -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.632 15:51:15 -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.632 15:51:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.632 15:51:15 -- scripts/common.sh@344 -- # case "$op" in 00:03:46.632 15:51:15 -- scripts/common.sh@345 -- # : 1 00:03:46.632 15:51:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.632 15:51:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:46.632 15:51:15 -- scripts/common.sh@365 -- # decimal 1 00:03:46.632 15:51:15 -- scripts/common.sh@353 -- # local d=1 00:03:46.632 15:51:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.632 15:51:15 -- scripts/common.sh@355 -- # echo 1 00:03:46.632 15:51:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.632 15:51:15 -- scripts/common.sh@366 -- # decimal 2 00:03:46.632 15:51:15 -- scripts/common.sh@353 -- # local d=2 00:03:46.632 15:51:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.632 15:51:15 -- scripts/common.sh@355 -- # echo 2 00:03:46.632 15:51:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.632 15:51:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.632 15:51:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.632 15:51:15 -- scripts/common.sh@368 -- # return 0 00:03:46.632 15:51:15 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.632 15:51:15 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:46.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.632 --rc genhtml_branch_coverage=1 00:03:46.632 --rc genhtml_function_coverage=1 00:03:46.632 --rc genhtml_legend=1 00:03:46.632 --rc geninfo_all_blocks=1 00:03:46.632 --rc geninfo_unexecuted_blocks=1 00:03:46.632 00:03:46.632 ' 00:03:46.632 15:51:15 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:46.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.632 --rc genhtml_branch_coverage=1 00:03:46.632 --rc genhtml_function_coverage=1 00:03:46.632 --rc genhtml_legend=1 00:03:46.632 --rc geninfo_all_blocks=1 00:03:46.632 --rc geninfo_unexecuted_blocks=1 00:03:46.632 00:03:46.632 ' 00:03:46.632 15:51:15 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:46.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.632 --rc genhtml_branch_coverage=1 00:03:46.632 --rc genhtml_function_coverage=1 00:03:46.632 --rc genhtml_legend=1 00:03:46.632 --rc geninfo_all_blocks=1 00:03:46.632 --rc geninfo_unexecuted_blocks=1 00:03:46.632 00:03:46.632 ' 00:03:46.632 15:51:15 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:46.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.632 --rc genhtml_branch_coverage=1 00:03:46.632 --rc genhtml_function_coverage=1 
00:03:46.632 --rc genhtml_legend=1 00:03:46.632 --rc geninfo_all_blocks=1 00:03:46.632 --rc geninfo_unexecuted_blocks=1 00:03:46.632 00:03:46.632 ' 00:03:46.632 15:51:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:46.632 15:51:15 -- nvmf/common.sh@7 -- # uname -s 00:03:46.632 15:51:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.632 15:51:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.632 15:51:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.632 15:51:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.632 15:51:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.632 15:51:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.632 15:51:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.632 15:51:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.632 15:51:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.632 15:51:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.632 15:51:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:46.632 15:51:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:46.632 15:51:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.632 15:51:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.632 15:51:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:46.632 15:51:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.632 15:51:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:46.632 15:51:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:46.632 15:51:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.632 15:51:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.632 15:51:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.632 15:51:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.632 15:51:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.632 15:51:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.632 15:51:15 -- paths/export.sh@5 -- # export PATH 00:03:46.632 15:51:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.632 15:51:15 -- nvmf/common.sh@51 -- # : 0 00:03:46.632 15:51:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:46.632 15:51:15 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:46.632 15:51:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.632 15:51:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.632 15:51:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.632 15:51:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:46.632 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:46.632 15:51:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:46.632 15:51:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:46.632 15:51:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:46.632 15:51:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.632 15:51:15 -- spdk/autotest.sh@32 -- # uname -s 00:03:46.632 15:51:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.632 15:51:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.632 15:51:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:46.632 15:51:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.632 15:51:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:46.632 15:51:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.632 15:51:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.632 15:51:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.632 15:51:15 -- spdk/autotest.sh@48 -- # udevadm_pid=2604075 00:03:46.632 15:51:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.632 15:51:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.632 15:51:15 -- pm/common@17 -- # local monitor 00:03:46.632 15:51:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.632 15:51:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.632 15:51:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.632 15:51:15 -- pm/common@21 -- # date +%s 00:03:46.632 15:51:15 -- pm/common@21 -- # date +%s 00:03:46.632 15:51:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.632 15:51:15 -- pm/common@25 -- # sleep 1 00:03:46.632 15:51:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734274275 00:03:46.632 15:51:15 -- pm/common@21 -- # date +%s 00:03:46.632 15:51:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734274275 00:03:46.632 15:51:15 -- pm/common@21 -- # date +%s 00:03:46.632 15:51:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734274275 00:03:46.632 15:51:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734274275 00:03:46.632 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734274275_collect-vmstat.pm.log 00:03:46.632 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734274275_collect-cpu-load.pm.log 00:03:46.892 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734274275_collect-cpu-temp.pm.log 00:03:46.892 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734274275_collect-bmc-pm.bmc.pm.log 00:03:47.829 15:51:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.829 15:51:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.829 15:51:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.829 15:51:16 -- common/autotest_common.sh@10 -- # set +x 00:03:47.829 15:51:16 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.829 15:51:16 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:47.829 15:51:16 -- common/autotest_common.sh@10 -- # set +x 00:03:47.829 15:51:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:47.829 15:51:16 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:47.829 15:51:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:47.829 15:51:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:47.829 15:51:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:47.829 15:51:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.829 15:51:16 -- common/autotest_common.sh@1455 -- # uname 00:03:47.829 15:51:16 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:47.829 15:51:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.829 15:51:16 -- common/autotest_common.sh@1475 -- # uname 00:03:47.829 15:51:16 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:47.829 15:51:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:47.829 15:51:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:47.829 lcov: LCOV version 1.15 00:03:47.829 15:51:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:09.767 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:09.767 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:13.055 15:51:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:13.055 15:51:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.055 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:04:13.055 15:51:41 -- spdk/autotest.sh@78 -- # rm -f 00:04:13.055 15:51:41 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.594 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:15.594 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:15.594 0000:00:04.5 (8086 2021): Already using the ioatdma driver 
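The lcov invocation traced above passes -i to capture an initial all-zero baseline (cov_base.info) before any test executes, so that files the tests never touch still appear in the final report; the GCOV warning about nvme_stubs.gcno is expected for stub files that define no functions. After the tests finish, the usual companion steps are a second capture and a merge, sketched here with illustrative output names:

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $LCOV_OPTS -q -c --no-external -d . -t Tests -o cov_test.info   # capture after the tests
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info  # merge with the baseline
    genhtml cov_total.info -o coverage_html                              # optional HTML report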
00:04:15.594 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:15.594 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:15.594 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:15.853 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:16.112 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:16.112 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:04:16.112 15:51:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:16.112 15:51:44 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:16.112 15:51:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:16.112 15:51:44 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:16.112 15:51:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.112 15:51:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:16.112 15:51:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:16.112 15:51:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.112 15:51:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.112 15:51:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:16.112 15:51:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.112 15:51:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.112 15:51:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:16.112 15:51:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:16.112 15:51:44 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.112 No valid GPT data, bailing 00:04:16.112 15:51:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.112 15:51:44 -- scripts/common.sh@394 -- # pt= 00:04:16.112 15:51:44 -- scripts/common.sh@395 -- # return 1 00:04:16.112 15:51:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.112 1+0 records in 00:04:16.112 1+0 records out 00:04:16.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00208272 s, 503 MB/s 00:04:16.112 15:51:44 -- spdk/autotest.sh@105 -- # sync 00:04:16.112 15:51:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.112 15:51:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.112 15:51:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.335 15:51:51 -- spdk/autotest.sh@111 -- # uname -s 00:04:24.335 15:51:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:24.335 15:51:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:24.335 15:51:51 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:26.872 Hugepages 00:04:26.872 node hugesize free / total 00:04:26.872 node0 1048576kB 0 / 0 00:04:26.872 node0 2048kB 0 / 0 00:04:26.872 node1 1048576kB 0 / 0 00:04:26.872 node1 2048kB 0 / 0 
00:04:26.872 00:04:26.872 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:26.872 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:26.872 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:26.872 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:26.872 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:26.872 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:26.872 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:26.872 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:26.872 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:26.872 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:26.872 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:26.872 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:26.872 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:26.872 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:26.872 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:26.872 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:26.872 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:26.872 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:26.872 15:51:55 -- spdk/autotest.sh@117 -- # uname -s 00:04:26.872 15:51:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:26.872 15:51:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:26.872 15:51:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:30.161 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.161 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.065 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.324 15:52:00 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:33.261 15:52:01 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:33.261 15:52:01 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:33.261 15:52:01 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.261 15:52:01 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:33.261 15:52:01 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:33.261 15:52:01 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:33.261 15:52:01 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.261 15:52:01 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:33.261 15:52:01 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:33.261 15:52:01 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:33.261 15:52:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:33.261 15:52:01 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.550 Waiting for 
block devices as requested 00:04:36.550 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.550 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.550 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.550 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.550 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.550 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.550 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.809 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.809 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.809 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.809 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:37.067 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:37.067 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:37.068 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:37.326 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:37.326 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:37.326 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:37.585 15:52:06 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:37.585 15:52:06 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:37.585 15:52:06 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.585 15:52:06 -- common/autotest_common.sh@1485 -- # grep 0000:d8:00.0/nvme/nvme 00:04:37.585 15:52:06 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:37.585 15:52:06 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:37.585 15:52:06 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:37.585 15:52:06 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:37.585 15:52:06 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:37.585 15:52:06 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:37.585 15:52:06 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:37.585 15:52:06 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:37.585 15:52:06 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:37.585 15:52:06 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:37.585 15:52:06 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:37.585 15:52:06 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:37.585 15:52:06 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:37.585 15:52:06 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:37.585 15:52:06 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:37.585 15:52:06 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:37.585 15:52:06 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:37.585 15:52:06 -- common/autotest_common.sh@1541 -- # continue 00:04:37.585 15:52:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:37.585 15:52:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.585 15:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.844 15:52:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:37.844 15:52:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.844 15:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.844 15:52:06 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:41.131 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 
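The oacs and unvmcap greps traced above are the script's controller capability checks: OACS bit 3 (mask 0x8) advertises namespace management, and this run reads oacs as 0xe, so the bit is set; unvmcap of 0 means there is no unallocated capacity, so the namespace-revert step is skipped with continue. The same OACS check written out directly, assuming nvme-cli and the nvme0 controller used in this run:

    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # ' 0xe' in this run
    if (( oacs & 0x8 )); then
        echo "namespace management supported"
    fi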
00:04:41.131 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.131 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.037 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.037 15:52:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:43.037 15:52:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.037 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:43.037 15:52:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:43.037 15:52:11 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:43.037 15:52:11 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.037 15:52:11 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:43.037 15:52:11 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:43.037 15:52:11 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:43.037 15:52:11 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:43.037 15:52:11 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:43.037 15:52:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:43.037 15:52:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:43.037 15:52:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.037 15:52:11 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.037 15:52:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:43.296 15:52:11 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:43.296 15:52:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:43.296 15:52:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:43.296 15:52:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:43.296 15:52:11 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:43.296 15:52:11 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:43.296 15:52:11 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:43.296 15:52:11 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:43.296 15:52:11 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:04:43.296 15:52:11 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:04:43.296 15:52:11 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2619656 00:04:43.297 15:52:11 -- common/autotest_common.sh@1583 -- # waitforlisten 2619656 00:04:43.297 15:52:11 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.297 15:52:11 -- common/autotest_common.sh@831 -- # '[' -z 2619656 ']' 00:04:43.297 15:52:11 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:43.297 15:52:11 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.297 15:52:11 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.297 15:52:11 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.297 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:43.297 [2024-12-15 15:52:11.689648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:43.297 [2024-12-15 15:52:11.689709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2619656 ] 00:04:43.297 [2024-12-15 15:52:11.761094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.297 [2024-12-15 15:52:11.800468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.556 15:52:11 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.556 15:52:11 -- common/autotest_common.sh@864 -- # return 0 00:04:43.556 15:52:11 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:43.556 15:52:11 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:43.556 15:52:11 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:46.844 nvme0n1 00:04:46.844 15:52:15 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:46.844 [2024-12-15 15:52:15.188054] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:46.844 request: 00:04:46.844 { 00:04:46.844 "nvme_ctrlr_name": "nvme0", 00:04:46.844 "password": "test", 00:04:46.844 "method": "bdev_nvme_opal_revert", 00:04:46.844 "req_id": 1 00:04:46.844 } 00:04:46.844 Got JSON-RPC error response 00:04:46.844 response: 00:04:46.844 { 00:04:46.844 "code": -32602, 00:04:46.844 "message": "Invalid parameters" 00:04:46.844 } 00:04:46.844 15:52:15 -- common/autotest_common.sh@1589 -- # true 00:04:46.844 15:52:15 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:46.844 15:52:15 -- common/autotest_common.sh@1593 -- # killprocess 2619656 00:04:46.844 15:52:15 -- common/autotest_common.sh@950 -- # '[' -z 2619656 ']' 00:04:46.844 15:52:15 -- common/autotest_common.sh@954 -- # kill -0 2619656 00:04:46.844 15:52:15 -- common/autotest_common.sh@955 -- # uname 00:04:46.844 15:52:15 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.844 15:52:15 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2619656 00:04:46.844 15:52:15 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.844 15:52:15 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.844 15:52:15 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2619656' 00:04:46.844 killing process with pid 2619656 00:04:46.844 15:52:15 -- common/autotest_common.sh@969 -- # kill 2619656 00:04:46.844 15:52:15 -- common/autotest_common.sh@974 -- # wait 2619656 00:04:49.379 15:52:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:49.379 15:52:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:49.379 15:52:17 -- spdk/autotest.sh@142 -- # [[ 
0 -eq 1 ]] 00:04:49.379 15:52:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:49.379 15:52:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:49.379 15:52:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:49.379 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:04:49.379 15:52:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:49.379 15:52:17 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:49.379 15:52:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.379 15:52:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.379 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:04:49.379 ************************************ 00:04:49.379 START TEST env 00:04:49.379 ************************************ 00:04:49.379 15:52:17 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:49.639 * Looking for test storage... 00:04:49.639 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:49.639 15:52:17 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:49.639 15:52:17 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:49.639 15:52:17 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:49.639 15:52:18 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:49.639 15:52:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.639 15:52:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.639 15:52:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.639 15:52:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.639 15:52:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.639 15:52:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.639 15:52:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.639 15:52:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.639 15:52:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.639 15:52:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.639 15:52:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.639 15:52:18 env -- scripts/common.sh@344 -- # case "$op" in 00:04:49.639 15:52:18 env -- scripts/common.sh@345 -- # : 1 00:04:49.639 15:52:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.639 15:52:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.639 15:52:18 env -- scripts/common.sh@365 -- # decimal 1 00:04:49.639 15:52:18 env -- scripts/common.sh@353 -- # local d=1 00:04:49.639 15:52:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.639 15:52:18 env -- scripts/common.sh@355 -- # echo 1 00:04:49.639 15:52:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.639 15:52:18 env -- scripts/common.sh@366 -- # decimal 2 00:04:49.639 15:52:18 env -- scripts/common.sh@353 -- # local d=2 00:04:49.639 15:52:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.639 15:52:18 env -- scripts/common.sh@355 -- # echo 2 00:04:49.639 15:52:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.639 15:52:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.639 15:52:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.639 15:52:18 env -- scripts/common.sh@368 -- # return 0 00:04:49.639 15:52:18 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.639 15:52:18 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:49.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.639 --rc genhtml_branch_coverage=1 00:04:49.639 --rc genhtml_function_coverage=1 00:04:49.639 --rc genhtml_legend=1 00:04:49.639 --rc geninfo_all_blocks=1 00:04:49.639 --rc geninfo_unexecuted_blocks=1 00:04:49.639 00:04:49.639 ' 00:04:49.639 15:52:18 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:49.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.639 --rc genhtml_branch_coverage=1 00:04:49.639 --rc genhtml_function_coverage=1 00:04:49.639 --rc genhtml_legend=1 00:04:49.639 --rc geninfo_all_blocks=1 00:04:49.639 --rc geninfo_unexecuted_blocks=1 00:04:49.639 00:04:49.639 ' 00:04:49.639 15:52:18 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:49.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.639 --rc genhtml_branch_coverage=1 00:04:49.639 --rc genhtml_function_coverage=1 00:04:49.639 --rc genhtml_legend=1 00:04:49.639 --rc geninfo_all_blocks=1 00:04:49.639 --rc geninfo_unexecuted_blocks=1 00:04:49.639 00:04:49.639 ' 00:04:49.639 15:52:18 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:49.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.639 --rc genhtml_branch_coverage=1 00:04:49.639 --rc genhtml_function_coverage=1 00:04:49.639 --rc genhtml_legend=1 00:04:49.639 --rc geninfo_all_blocks=1 00:04:49.639 --rc geninfo_unexecuted_blocks=1 00:04:49.639 00:04:49.639 ' 00:04:49.639 15:52:18 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.639 15:52:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.639 15:52:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.639 15:52:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.639 ************************************ 00:04:49.639 START TEST env_memory 00:04:49.639 ************************************ 00:04:49.639 15:52:18 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.639 00:04:49.639 00:04:49.639 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.639 http://cunit.sourceforge.net/ 00:04:49.639 00:04:49.639 00:04:49.639 Suite: memory 00:04:49.639 Test: alloc and free memory map ...[2024-12-15 15:52:18.121987] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.639 passed 00:04:49.639 Test: mem map translation ...[2024-12-15 15:52:18.140914] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.639 [2024-12-15 15:52:18.140928] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.639 [2024-12-15 15:52:18.140978] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.639 [2024-12-15 15:52:18.140986] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.639 passed 00:04:49.639 Test: mem map registration ...[2024-12-15 15:52:18.176464] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:49.639 [2024-12-15 15:52:18.176480] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:49.639 passed 00:04:49.900 Test: mem map adjacent registrations ...passed 00:04:49.900 00:04:49.900 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.900 suites 1 1 n/a 0 0 00:04:49.900 tests 4 4 4 0 0 00:04:49.900 asserts 152 152 152 0 n/a 00:04:49.900 00:04:49.900 Elapsed time = 0.125 seconds 00:04:49.900 00:04:49.900 real 0m0.132s 00:04:49.900 user 0m0.125s 00:04:49.900 sys 0m0.007s 00:04:49.900 15:52:18 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.900 15:52:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:49.900 ************************************ 00:04:49.900 END TEST env_memory 00:04:49.900 ************************************ 00:04:49.900 15:52:18 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.900 15:52:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.900 15:52:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.900 15:52:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.900 ************************************ 00:04:49.900 START TEST env_vtophys 00:04:49.900 ************************************ 00:04:49.900 15:52:18 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.900 EAL: lib.eal log level changed from notice to debug 00:04:49.900 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.900 EAL: Detected lcore 1 as core 1 on socket 0 00:04:49.900 EAL: Detected lcore 2 as core 2 on socket 0 00:04:49.900 EAL: Detected lcore 3 as core 3 on socket 0 00:04:49.900 EAL: Detected lcore 4 as core 4 on socket 0 00:04:49.900 EAL: Detected lcore 5 as core 5 on socket 0 00:04:49.900 EAL: Detected lcore 6 as core 6 on socket 0 00:04:49.900 EAL: Detected lcore 7 as core 8 on socket 0 00:04:49.900 EAL: Detected lcore 8 as core 9 on socket 0 00:04:49.900 EAL: Detected lcore 9 as core 10 on socket 0 00:04:49.900 EAL: Detected lcore 10 as core 11 on socket 0 00:04:49.900 
EAL: Detected lcore 11 as core 12 on socket 0 00:04:49.900 EAL: Detected lcore 12 as core 13 on socket 0 00:04:49.900 EAL: Detected lcore 13 as core 14 on socket 0 00:04:49.900 EAL: Detected lcore 14 as core 16 on socket 0 00:04:49.900 EAL: Detected lcore 15 as core 17 on socket 0 00:04:49.900 EAL: Detected lcore 16 as core 18 on socket 0 00:04:49.900 EAL: Detected lcore 17 as core 19 on socket 0 00:04:49.900 EAL: Detected lcore 18 as core 20 on socket 0 00:04:49.900 EAL: Detected lcore 19 as core 21 on socket 0 00:04:49.900 EAL: Detected lcore 20 as core 22 on socket 0 00:04:49.900 EAL: Detected lcore 21 as core 24 on socket 0 00:04:49.900 EAL: Detected lcore 22 as core 25 on socket 0 00:04:49.900 EAL: Detected lcore 23 as core 26 on socket 0 00:04:49.900 EAL: Detected lcore 24 as core 27 on socket 0 00:04:49.900 EAL: Detected lcore 25 as core 28 on socket 0 00:04:49.900 EAL: Detected lcore 26 as core 29 on socket 0 00:04:49.900 EAL: Detected lcore 27 as core 30 on socket 0 00:04:49.900 EAL: Detected lcore 28 as core 0 on socket 1 00:04:49.900 EAL: Detected lcore 29 as core 1 on socket 1 00:04:49.900 EAL: Detected lcore 30 as core 2 on socket 1 00:04:49.900 EAL: Detected lcore 31 as core 3 on socket 1 00:04:49.900 EAL: Detected lcore 32 as core 4 on socket 1 00:04:49.900 EAL: Detected lcore 33 as core 5 on socket 1 00:04:49.900 EAL: Detected lcore 34 as core 6 on socket 1 00:04:49.900 EAL: Detected lcore 35 as core 8 on socket 1 00:04:49.900 EAL: Detected lcore 36 as core 9 on socket 1 00:04:49.900 EAL: Detected lcore 37 as core 10 on socket 1 00:04:49.900 EAL: Detected lcore 38 as core 11 on socket 1 00:04:49.900 EAL: Detected lcore 39 as core 12 on socket 1 00:04:49.900 EAL: Detected lcore 40 as core 13 on socket 1 00:04:49.900 EAL: Detected lcore 41 as core 14 on socket 1 00:04:49.900 EAL: Detected lcore 42 as core 16 on socket 1 00:04:49.900 EAL: Detected lcore 43 as core 17 on socket 1 00:04:49.900 EAL: Detected lcore 44 as core 18 on socket 1 00:04:49.900 EAL: Detected lcore 45 as core 19 on socket 1 00:04:49.900 EAL: Detected lcore 46 as core 20 on socket 1 00:04:49.900 EAL: Detected lcore 47 as core 21 on socket 1 00:04:49.900 EAL: Detected lcore 48 as core 22 on socket 1 00:04:49.900 EAL: Detected lcore 49 as core 24 on socket 1 00:04:49.900 EAL: Detected lcore 50 as core 25 on socket 1 00:04:49.900 EAL: Detected lcore 51 as core 26 on socket 1 00:04:49.900 EAL: Detected lcore 52 as core 27 on socket 1 00:04:49.900 EAL: Detected lcore 53 as core 28 on socket 1 00:04:49.900 EAL: Detected lcore 54 as core 29 on socket 1 00:04:49.900 EAL: Detected lcore 55 as core 30 on socket 1 00:04:49.900 EAL: Detected lcore 56 as core 0 on socket 0 00:04:49.900 EAL: Detected lcore 57 as core 1 on socket 0 00:04:49.900 EAL: Detected lcore 58 as core 2 on socket 0 00:04:49.900 EAL: Detected lcore 59 as core 3 on socket 0 00:04:49.900 EAL: Detected lcore 60 as core 4 on socket 0 00:04:49.900 EAL: Detected lcore 61 as core 5 on socket 0 00:04:49.900 EAL: Detected lcore 62 as core 6 on socket 0 00:04:49.900 EAL: Detected lcore 63 as core 8 on socket 0 00:04:49.900 EAL: Detected lcore 64 as core 9 on socket 0 00:04:49.900 EAL: Detected lcore 65 as core 10 on socket 0 00:04:49.900 EAL: Detected lcore 66 as core 11 on socket 0 00:04:49.900 EAL: Detected lcore 67 as core 12 on socket 0 00:04:49.900 EAL: Detected lcore 68 as core 13 on socket 0 00:04:49.900 EAL: Detected lcore 69 as core 14 on socket 0 00:04:49.900 EAL: Detected lcore 70 as core 16 on socket 0 00:04:49.900 EAL: Detected lcore 71 as core 
17 on socket 0 00:04:49.900 EAL: Detected lcore 72 as core 18 on socket 0 00:04:49.900 EAL: Detected lcore 73 as core 19 on socket 0 00:04:49.900 EAL: Detected lcore 74 as core 20 on socket 0 00:04:49.900 EAL: Detected lcore 75 as core 21 on socket 0 00:04:49.900 EAL: Detected lcore 76 as core 22 on socket 0 00:04:49.901 EAL: Detected lcore 77 as core 24 on socket 0 00:04:49.901 EAL: Detected lcore 78 as core 25 on socket 0 00:04:49.901 EAL: Detected lcore 79 as core 26 on socket 0 00:04:49.901 EAL: Detected lcore 80 as core 27 on socket 0 00:04:49.901 EAL: Detected lcore 81 as core 28 on socket 0 00:04:49.901 EAL: Detected lcore 82 as core 29 on socket 0 00:04:49.901 EAL: Detected lcore 83 as core 30 on socket 0 00:04:49.901 EAL: Detected lcore 84 as core 0 on socket 1 00:04:49.901 EAL: Detected lcore 85 as core 1 on socket 1 00:04:49.901 EAL: Detected lcore 86 as core 2 on socket 1 00:04:49.901 EAL: Detected lcore 87 as core 3 on socket 1 00:04:49.901 EAL: Detected lcore 88 as core 4 on socket 1 00:04:49.901 EAL: Detected lcore 89 as core 5 on socket 1 00:04:49.901 EAL: Detected lcore 90 as core 6 on socket 1 00:04:49.901 EAL: Detected lcore 91 as core 8 on socket 1 00:04:49.901 EAL: Detected lcore 92 as core 9 on socket 1 00:04:49.901 EAL: Detected lcore 93 as core 10 on socket 1 00:04:49.901 EAL: Detected lcore 94 as core 11 on socket 1 00:04:49.901 EAL: Detected lcore 95 as core 12 on socket 1 00:04:49.901 EAL: Detected lcore 96 as core 13 on socket 1 00:04:49.901 EAL: Detected lcore 97 as core 14 on socket 1 00:04:49.901 EAL: Detected lcore 98 as core 16 on socket 1 00:04:49.901 EAL: Detected lcore 99 as core 17 on socket 1 00:04:49.901 EAL: Detected lcore 100 as core 18 on socket 1 00:04:49.901 EAL: Detected lcore 101 as core 19 on socket 1 00:04:49.901 EAL: Detected lcore 102 as core 20 on socket 1 00:04:49.901 EAL: Detected lcore 103 as core 21 on socket 1 00:04:49.901 EAL: Detected lcore 104 as core 22 on socket 1 00:04:49.901 EAL: Detected lcore 105 as core 24 on socket 1 00:04:49.901 EAL: Detected lcore 106 as core 25 on socket 1 00:04:49.901 EAL: Detected lcore 107 as core 26 on socket 1 00:04:49.901 EAL: Detected lcore 108 as core 27 on socket 1 00:04:49.901 EAL: Detected lcore 109 as core 28 on socket 1 00:04:49.901 EAL: Detected lcore 110 as core 29 on socket 1 00:04:49.901 EAL: Detected lcore 111 as core 30 on socket 1 00:04:49.901 EAL: Maximum logical cores by configuration: 128 00:04:49.901 EAL: Detected CPU lcores: 112 00:04:49.901 EAL: Detected NUMA nodes: 2 00:04:49.901 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:49.901 EAL: Detected shared linkage of DPDK 00:04:49.901 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:49.901 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:49.901 EAL: Registered [vdev] bus. 
00:04:49.901 EAL: bus.vdev log level changed from disabled to notice 00:04:49.901 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:49.901 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:49.901 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:49.901 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:49.901 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:49.901 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:49.901 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:49.901 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:49.901 EAL: No shared files mode enabled, IPC will be disabled 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: Bus pci wants IOVA as 'DC' 00:04:49.901 EAL: Bus vdev wants IOVA as 'DC' 00:04:49.901 EAL: Buses did not request a specific IOVA mode. 00:04:49.901 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:49.901 EAL: Selected IOVA mode 'VA' 00:04:49.901 EAL: Probing VFIO support... 00:04:49.901 EAL: IOMMU type 1 (Type 1) is supported 00:04:49.901 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:49.901 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:49.901 EAL: VFIO support initialized 00:04:49.901 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.901 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.901 EAL: Setting up physically contiguous memory... 
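(A quick arithmetic check on the memseg reservations that follow: each list is created with n_segs:8192 at hugepage_sz:2097152, so every "Ask a virtual area of 0x400000000 bytes" is exactly 8192 * 2 MiB = 2^34 B = 16 GiB, preceded by a 0x61000-byte (~388 KiB) list header. With 4 lists per NUMA socket and 2 sockets detected, EAL reserves 8 * 16 GiB = 128 GiB of virtual address space up front, before any hugepage is actually backed.)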
00:04:49.901 EAL: Setting maximum number of open files to 524288 00:04:49.901 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.901 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:49.901 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.901 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.901 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.901 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.901 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.901 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.901 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.901 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.901 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:49.901 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.901 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.901 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.901 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.901 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:49.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.901 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:49.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.901 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:49.901 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:49.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.901 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:49.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.901 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:49.901 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:49.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.901 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:49.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.901 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:49.901 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:49.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.901 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:49.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.901 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:49.901 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:49.901 EAL: Hugepages will be freed exactly as allocated. 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: TSC frequency is ~2500000 KHz 00:04:49.901 EAL: Main lcore 0 is ready (tid=7f721d2a9a00;cpuset=[0]) 00:04:49.901 EAL: Trying to obtain current memory policy. 00:04:49.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.901 EAL: Restoring previous memory policy: 0 00:04:49.901 EAL: request: mp_malloc_sync 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.901 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:04:49.901 EAL: probe driver: 8086:37d2 net_i40e 00:04:49.901 EAL: Not managed by a supported kernel driver, skipped 00:04:49.901 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:04:49.901 EAL: probe driver: 8086:37d2 net_i40e 00:04:49.901 EAL: Not managed by a supported kernel driver, skipped 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.901 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.901 00:04:49.901 00:04:49.901 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.901 http://cunit.sourceforge.net/ 00:04:49.901 00:04:49.901 00:04:49.901 Suite: components_suite 00:04:49.901 Test: vtophys_malloc_test ...passed 00:04:49.901 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:49.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.901 EAL: Restoring previous memory policy: 4 00:04:49.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.901 EAL: request: mp_malloc_sync 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.901 EAL: request: mp_malloc_sync 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.901 EAL: Trying to obtain current memory policy. 00:04:49.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.901 EAL: Restoring previous memory policy: 4 00:04:49.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.901 EAL: request: mp_malloc_sync 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.901 EAL: request: mp_malloc_sync 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.901 EAL: Trying to obtain current memory policy. 00:04:49.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.901 EAL: Restoring previous memory policy: 4 00:04:49.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.901 EAL: request: mp_malloc_sync 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: Heap on socket 0 was expanded by 10MB 00:04:49.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.901 EAL: request: mp_malloc_sync 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.901 EAL: Heap on socket 0 was shrunk by 10MB 00:04:49.901 EAL: Trying to obtain current memory policy. 
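(The vtophys_spdk_malloc_test rounds above and below grow the heap in a 2^k + 2 MB ladder: 4, 6, 10, 18, ..., 1026 MB. A plausible reading, inferred from this log rather than from the test source: each round allocates a doubled buffer while the 2 MB expanded at EAL startup stays resident. A minimal sketch reproducing the expected sizes:

    # Reproduce the "expanded by ...MB" sizes from the vtophys rounds.
    # Assumption (from this log, not the test source): each round holds a
    # 2**k MiB buffer on top of the 2 MiB allocated at EAL startup.
    for k in range(1, 11):
        print(f"EAL: Heap on socket 0 was expanded by {2 ** k + 2}MB")

Each expansion is matched by a "shrunk by" line when the round frees its buffer, since hugepages are freed exactly as allocated.)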
00:04:49.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.901 EAL: Restoring previous memory policy: 4 00:04:49.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.901 EAL: request: mp_malloc_sync 00:04:49.901 EAL: No shared files mode enabled, IPC is disabled 00:04:49.902 EAL: Heap on socket 0 was expanded by 18MB 00:04:49.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.902 EAL: request: mp_malloc_sync 00:04:49.902 EAL: No shared files mode enabled, IPC is disabled 00:04:49.902 EAL: Heap on socket 0 was shrunk by 18MB 00:04:49.902 EAL: Trying to obtain current memory policy. 00:04:49.902 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.902 EAL: Restoring previous memory policy: 4 00:04:49.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.902 EAL: request: mp_malloc_sync 00:04:49.902 EAL: No shared files mode enabled, IPC is disabled 00:04:49.902 EAL: Heap on socket 0 was expanded by 34MB 00:04:49.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.902 EAL: request: mp_malloc_sync 00:04:49.902 EAL: No shared files mode enabled, IPC is disabled 00:04:49.902 EAL: Heap on socket 0 was shrunk by 34MB 00:04:49.902 EAL: Trying to obtain current memory policy. 00:04:49.902 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.902 EAL: Restoring previous memory policy: 4 00:04:49.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.902 EAL: request: mp_malloc_sync 00:04:49.902 EAL: No shared files mode enabled, IPC is disabled 00:04:49.902 EAL: Heap on socket 0 was expanded by 66MB 00:04:49.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.902 EAL: request: mp_malloc_sync 00:04:49.902 EAL: No shared files mode enabled, IPC is disabled 00:04:49.902 EAL: Heap on socket 0 was shrunk by 66MB 00:04:49.902 EAL: Trying to obtain current memory policy. 00:04:49.902 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.902 EAL: Restoring previous memory policy: 4 00:04:49.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.902 EAL: request: mp_malloc_sync 00:04:49.902 EAL: No shared files mode enabled, IPC is disabled 00:04:49.902 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.161 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.161 EAL: request: mp_malloc_sync 00:04:50.161 EAL: No shared files mode enabled, IPC is disabled 00:04:50.161 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.161 EAL: Trying to obtain current memory policy. 00:04:50.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.161 EAL: Restoring previous memory policy: 4 00:04:50.161 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.161 EAL: request: mp_malloc_sync 00:04:50.161 EAL: No shared files mode enabled, IPC is disabled 00:04:50.161 EAL: Heap on socket 0 was expanded by 258MB 00:04:50.161 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.161 EAL: request: mp_malloc_sync 00:04:50.161 EAL: No shared files mode enabled, IPC is disabled 00:04:50.161 EAL: Heap on socket 0 was shrunk by 258MB 00:04:50.161 EAL: Trying to obtain current memory policy. 
00:04:50.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.161 EAL: Restoring previous memory policy: 4 00:04:50.161 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.161 EAL: request: mp_malloc_sync 00:04:50.161 EAL: No shared files mode enabled, IPC is disabled 00:04:50.161 EAL: Heap on socket 0 was expanded by 514MB 00:04:50.421 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.421 EAL: request: mp_malloc_sync 00:04:50.421 EAL: No shared files mode enabled, IPC is disabled 00:04:50.421 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.421 EAL: Trying to obtain current memory policy. 00:04:50.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.680 EAL: Restoring previous memory policy: 4 00:04:50.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.680 EAL: request: mp_malloc_sync 00:04:50.680 EAL: No shared files mode enabled, IPC is disabled 00:04:50.680 EAL: Heap on socket 0 was expanded by 1026MB 00:04:50.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.940 EAL: request: mp_malloc_sync 00:04:50.940 EAL: No shared files mode enabled, IPC is disabled 00:04:50.940 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:50.940 passed 00:04:50.940 00:04:50.940 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.940 suites 1 1 n/a 0 0 00:04:50.940 tests 2 2 2 0 0 00:04:50.940 asserts 497 497 497 0 n/a 00:04:50.940 00:04:50.940 Elapsed time = 0.958 seconds 00:04:50.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.940 EAL: request: mp_malloc_sync 00:04:50.940 EAL: No shared files mode enabled, IPC is disabled 00:04:50.940 EAL: Heap on socket 0 was shrunk by 2MB 00:04:50.940 EAL: No shared files mode enabled, IPC is disabled 00:04:50.940 EAL: No shared files mode enabled, IPC is disabled 00:04:50.940 EAL: No shared files mode enabled, IPC is disabled 00:04:50.940 00:04:50.940 real 0m1.084s 00:04:50.940 user 0m0.633s 00:04:50.940 sys 0m0.426s 00:04:50.940 15:52:19 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.940 15:52:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:50.940 ************************************ 00:04:50.940 END TEST env_vtophys 00:04:50.940 ************************************ 00:04:50.940 15:52:19 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:50.940 15:52:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.940 15:52:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.940 15:52:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.940 ************************************ 00:04:50.940 START TEST env_pci 00:04:50.940 ************************************ 00:04:50.940 15:52:19 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:50.940 00:04:50.940 00:04:50.940 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.940 http://cunit.sourceforge.net/ 00:04:50.940 00:04:50.940 00:04:50.940 Suite: pci 00:04:50.940 Test: pci_hook ...[2024-12-15 15:52:19.474492] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2621145 has claimed it 00:04:50.940 EAL: Cannot find device (10000:00:01.0) 00:04:50.940 EAL: Failed to attach device on primary process 00:04:50.940 passed 00:04:50.940 00:04:50.940 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.940 suites 1 
1 n/a 0 0 00:04:50.940 tests 1 1 1 0 0 00:04:50.940 asserts 25 25 25 0 n/a 00:04:50.940 00:04:50.940 Elapsed time = 0.029 seconds 00:04:50.940 00:04:50.940 real 0m0.047s 00:04:50.940 user 0m0.011s 00:04:50.940 sys 0m0.035s 00:04:50.940 15:52:19 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.200 15:52:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:51.200 ************************************ 00:04:51.200 END TEST env_pci 00:04:51.200 ************************************ 00:04:51.200 15:52:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:51.200 15:52:19 env -- env/env.sh@15 -- # uname 00:04:51.200 15:52:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:51.200 15:52:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:51.200 15:52:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.200 15:52:19 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:51.200 15:52:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.200 15:52:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.200 ************************************ 00:04:51.200 START TEST env_dpdk_post_init 00:04:51.200 ************************************ 00:04:51.200 15:52:19 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.200 EAL: Detected CPU lcores: 112 00:04:51.200 EAL: Detected NUMA nodes: 2 00:04:51.200 EAL: Detected shared linkage of DPDK 00:04:51.200 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.200 EAL: Selected IOVA mode 'VA' 00:04:51.200 EAL: VFIO support initialized 00:04:51.200 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.200 EAL: Using IOMMU type 1 (Type 1) 00:04:51.200 EAL: Ignore mapping IO port bar(1) 00:04:51.200 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:51.200 EAL: Ignore mapping IO port bar(1) 00:04:51.200 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:51.200 EAL: Ignore mapping IO port bar(1) 00:04:51.200 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:51.200 EAL: Ignore mapping IO port bar(1) 00:04:51.200 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:51.200 EAL: Ignore mapping IO port bar(1) 00:04:51.200 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:51.460 EAL: Ignore mapping IO port 
bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:51.460 EAL: Ignore mapping IO port bar(1) 00:04:51.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:52.396 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:56.677 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:56.677 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:56.677 Starting DPDK initialization... 00:04:56.677 Starting SPDK post initialization... 00:04:56.677 SPDK NVMe probe 00:04:56.677 Attaching to 0000:d8:00.0 00:04:56.677 Attached to 0000:d8:00.0 00:04:56.677 Cleaning up... 00:04:56.677 00:04:56.677 real 0m5.343s 00:04:56.677 user 0m4.030s 00:04:56.677 sys 0m0.369s 00:04:56.677 15:52:24 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.677 15:52:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.677 ************************************ 00:04:56.677 END TEST env_dpdk_post_init 00:04:56.677 ************************************ 00:04:56.677 15:52:24 env -- env/env.sh@26 -- # uname 00:04:56.677 15:52:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.677 15:52:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.677 15:52:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.677 15:52:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.677 15:52:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.677 ************************************ 00:04:56.677 START TEST env_mem_callbacks 00:04:56.677 ************************************ 00:04:56.677 15:52:25 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.677 EAL: Detected CPU lcores: 112 00:04:56.678 EAL: Detected NUMA nodes: 2 00:04:56.678 EAL: Detected shared linkage of DPDK 00:04:56.678 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.678 EAL: Selected IOVA mode 'VA' 00:04:56.678 EAL: VFIO support initialized 00:04:56.678 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.678 00:04:56.678 00:04:56.678 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.678 http://cunit.sourceforge.net/ 00:04:56.678 00:04:56.678 00:04:56.678 Suite: memory 00:04:56.678 Test: test ... 
00:04:56.678 register 0x200000200000 2097152 00:04:56.678 malloc 3145728 00:04:56.678 register 0x200000400000 4194304 00:04:56.678 buf 0x200000500000 len 3145728 PASSED 00:04:56.678 malloc 64 00:04:56.678 buf 0x2000004fff40 len 64 PASSED 00:04:56.678 malloc 4194304 00:04:56.678 register 0x200000800000 6291456 00:04:56.678 buf 0x200000a00000 len 4194304 PASSED 00:04:56.678 free 0x200000500000 3145728 00:04:56.678 free 0x2000004fff40 64 00:04:56.678 unregister 0x200000400000 4194304 PASSED 00:04:56.678 free 0x200000a00000 4194304 00:04:56.678 unregister 0x200000800000 6291456 PASSED 00:04:56.678 malloc 8388608 00:04:56.678 register 0x200000400000 10485760 00:04:56.678 buf 0x200000600000 len 8388608 PASSED 00:04:56.678 free 0x200000600000 8388608 00:04:56.678 unregister 0x200000400000 10485760 PASSED 00:04:56.678 passed 00:04:56.678 00:04:56.678 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.678 suites 1 1 n/a 0 0 00:04:56.678 tests 1 1 1 0 0 00:04:56.678 asserts 15 15 15 0 n/a 00:04:56.678 00:04:56.678 Elapsed time = 0.005 seconds 00:04:56.678 00:04:56.678 real 0m0.051s 00:04:56.678 user 0m0.014s 00:04:56.678 sys 0m0.037s 00:04:56.678 15:52:25 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.678 15:52:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.678 ************************************ 00:04:56.678 END TEST env_mem_callbacks 00:04:56.678 ************************************ 00:04:56.678 00:04:56.678 real 0m7.217s 00:04:56.678 user 0m5.046s 00:04:56.678 sys 0m1.233s 00:04:56.678 15:52:25 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.678 15:52:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.678 ************************************ 00:04:56.678 END TEST env 00:04:56.678 ************************************ 00:04:56.678 15:52:25 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:56.678 15:52:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.678 15:52:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.678 15:52:25 -- common/autotest_common.sh@10 -- # set +x 00:04:56.678 ************************************ 00:04:56.678 START TEST rpc 00:04:56.678 ************************************ 00:04:56.678 15:52:25 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:56.937 * Looking for test storage... 
00:04:56.937 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:56.937 15:52:25 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:56.937 15:52:25 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:56.937 15:52:25 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:56.937 15:52:25 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:56.937 15:52:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.937 15:52:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.937 15:52:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.937 15:52:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.937 15:52:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.937 15:52:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.937 15:52:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.937 15:52:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.937 15:52:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.937 15:52:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.937 15:52:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.937 15:52:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:56.937 15:52:25 rpc -- scripts/common.sh@345 -- # : 1 00:04:56.937 15:52:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.937 15:52:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.937 15:52:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:56.937 15:52:25 rpc -- scripts/common.sh@353 -- # local d=1 00:04:56.937 15:52:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.937 15:52:25 rpc -- scripts/common.sh@355 -- # echo 1 00:04:56.937 15:52:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.937 15:52:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:56.937 15:52:25 rpc -- scripts/common.sh@353 -- # local d=2 00:04:56.937 15:52:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.937 15:52:25 rpc -- scripts/common.sh@355 -- # echo 2 00:04:56.937 15:52:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.937 15:52:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.937 15:52:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.937 15:52:25 rpc -- scripts/common.sh@368 -- # return 0 00:04:56.937 15:52:25 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.937 15:52:25 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.937 --rc genhtml_branch_coverage=1 00:04:56.937 --rc genhtml_function_coverage=1 00:04:56.937 --rc genhtml_legend=1 00:04:56.937 --rc geninfo_all_blocks=1 00:04:56.937 --rc geninfo_unexecuted_blocks=1 00:04:56.937 00:04:56.937 ' 00:04:56.937 15:52:25 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:56.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.938 --rc genhtml_branch_coverage=1 00:04:56.938 --rc genhtml_function_coverage=1 00:04:56.938 --rc genhtml_legend=1 00:04:56.938 --rc geninfo_all_blocks=1 00:04:56.938 --rc geninfo_unexecuted_blocks=1 00:04:56.938 00:04:56.938 ' 00:04:56.938 15:52:25 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:56.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.938 --rc genhtml_branch_coverage=1 00:04:56.938 --rc genhtml_function_coverage=1 00:04:56.938 
--rc genhtml_legend=1 00:04:56.938 --rc geninfo_all_blocks=1 00:04:56.938 --rc geninfo_unexecuted_blocks=1 00:04:56.938 00:04:56.938 ' 00:04:56.938 15:52:25 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:56.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.938 --rc genhtml_branch_coverage=1 00:04:56.938 --rc genhtml_function_coverage=1 00:04:56.938 --rc genhtml_legend=1 00:04:56.938 --rc geninfo_all_blocks=1 00:04:56.938 --rc geninfo_unexecuted_blocks=1 00:04:56.938 00:04:56.938 ' 00:04:56.938 15:52:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2622210 00:04:56.938 15:52:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.938 15:52:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2622210 00:04:56.938 15:52:25 rpc -- common/autotest_common.sh@831 -- # '[' -z 2622210 ']' 00:04:56.938 15:52:25 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.938 15:52:25 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.938 15:52:25 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.938 15:52:25 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.938 15:52:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:56.938 15:52:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.938 [2024-12-15 15:52:25.425132] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:56.938 [2024-12-15 15:52:25.425185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2622210 ] 00:04:56.938 [2024-12-15 15:52:25.493780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.196 [2024-12-15 15:52:25.533733] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.197 [2024-12-15 15:52:25.533774] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2622210' to capture a snapshot of events at runtime. 00:04:57.197 [2024-12-15 15:52:25.533785] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.197 [2024-12-15 15:52:25.533794] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.197 [2024-12-15 15:52:25.533802] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2622210 for offline analysis/debug. 
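(Both the bdev_nvme_opal_revert error earlier and the rpc_cmd bdev_get_bdevs calls in the rpc_integrity test below ride the same JSON-RPC 2.0 transport that scripts/rpc.py wraps. A minimal client sketch, assuming only the default /var/tmp/spdk.sock path used throughout this log; the helper name rpc() is ours, not SPDK's:

    import json
    import socket

    def rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        """Send one JSON-RPC 2.0 request to a running spdk_tgt, return the reply."""
        req = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("spdk_tgt closed the socket")
                buf += chunk
                try:
                    return json.loads(buf)  # reply is one JSON object
                except json.JSONDecodeError:
                    pass  # partial read; keep receiving

    # With no bdevs created, bdev_get_bdevs returns {"result": []}, which is
    # why the integrity test's 'jq length' check below expects 0. Failures
    # come back as {"error": {"code": -32602, ...}} instead, exactly as the
    # opal_revert call showed earlier.
    print(rpc("bdev_get_bdevs"))

)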
00:04:57.197 [2024-12-15 15:52:25.533829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.197 15:52:25 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.197 15:52:25 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:57.197 15:52:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:57.197 15:52:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:57.197 15:52:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:57.197 15:52:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:57.197 15:52:25 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.197 15:52:25 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.197 15:52:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.197 ************************************ 00:04:57.197 START TEST rpc_integrity 00:04:57.197 ************************************ 00:04:57.197 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:57.197 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.197 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.197 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.197 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.197 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.197 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.457 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.457 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.457 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.457 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.457 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.457 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:57.457 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.457 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.457 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.457 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.457 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.457 { 00:04:57.457 "name": "Malloc0", 00:04:57.457 "aliases": [ 00:04:57.457 "74137d62-7675-4e17-a840-66b146db98e0" 00:04:57.457 ], 00:04:57.457 "product_name": "Malloc disk", 00:04:57.457 "block_size": 512, 00:04:57.457 "num_blocks": 16384, 00:04:57.457 "uuid": "74137d62-7675-4e17-a840-66b146db98e0", 00:04:57.457 "assigned_rate_limits": { 00:04:57.457 "rw_ios_per_sec": 0, 00:04:57.457 "rw_mbytes_per_sec": 0, 00:04:57.457 "r_mbytes_per_sec": 0, 00:04:57.457 "w_mbytes_per_sec": 0 00:04:57.457 }, 00:04:57.457 "claimed": false, 
00:04:57.457 "zoned": false, 00:04:57.457 "supported_io_types": { 00:04:57.457 "read": true, 00:04:57.457 "write": true, 00:04:57.458 "unmap": true, 00:04:57.458 "flush": true, 00:04:57.458 "reset": true, 00:04:57.458 "nvme_admin": false, 00:04:57.458 "nvme_io": false, 00:04:57.458 "nvme_io_md": false, 00:04:57.458 "write_zeroes": true, 00:04:57.458 "zcopy": true, 00:04:57.458 "get_zone_info": false, 00:04:57.458 "zone_management": false, 00:04:57.458 "zone_append": false, 00:04:57.458 "compare": false, 00:04:57.458 "compare_and_write": false, 00:04:57.458 "abort": true, 00:04:57.458 "seek_hole": false, 00:04:57.458 "seek_data": false, 00:04:57.458 "copy": true, 00:04:57.458 "nvme_iov_md": false 00:04:57.458 }, 00:04:57.458 "memory_domains": [ 00:04:57.458 { 00:04:57.458 "dma_device_id": "system", 00:04:57.458 "dma_device_type": 1 00:04:57.458 }, 00:04:57.458 { 00:04:57.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.458 "dma_device_type": 2 00:04:57.458 } 00:04:57.458 ], 00:04:57.458 "driver_specific": {} 00:04:57.458 } 00:04:57.458 ]' 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 [2024-12-15 15:52:25.875833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:57.458 [2024-12-15 15:52:25.875861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.458 [2024-12-15 15:52:25.875875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1374a80 00:04:57.458 [2024-12-15 15:52:25.875883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.458 [2024-12-15 15:52:25.876943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.458 [2024-12-15 15:52:25.876965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.458 Passthru0 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.458 { 00:04:57.458 "name": "Malloc0", 00:04:57.458 "aliases": [ 00:04:57.458 "74137d62-7675-4e17-a840-66b146db98e0" 00:04:57.458 ], 00:04:57.458 "product_name": "Malloc disk", 00:04:57.458 "block_size": 512, 00:04:57.458 "num_blocks": 16384, 00:04:57.458 "uuid": "74137d62-7675-4e17-a840-66b146db98e0", 00:04:57.458 "assigned_rate_limits": { 00:04:57.458 "rw_ios_per_sec": 0, 00:04:57.458 "rw_mbytes_per_sec": 0, 00:04:57.458 "r_mbytes_per_sec": 0, 00:04:57.458 "w_mbytes_per_sec": 0 00:04:57.458 }, 00:04:57.458 "claimed": true, 00:04:57.458 "claim_type": "exclusive_write", 00:04:57.458 "zoned": false, 00:04:57.458 "supported_io_types": { 00:04:57.458 "read": true, 00:04:57.458 "write": true, 00:04:57.458 "unmap": true, 00:04:57.458 "flush": true, 00:04:57.458 "reset": true, 
00:04:57.458 "nvme_admin": false, 00:04:57.458 "nvme_io": false, 00:04:57.458 "nvme_io_md": false, 00:04:57.458 "write_zeroes": true, 00:04:57.458 "zcopy": true, 00:04:57.458 "get_zone_info": false, 00:04:57.458 "zone_management": false, 00:04:57.458 "zone_append": false, 00:04:57.458 "compare": false, 00:04:57.458 "compare_and_write": false, 00:04:57.458 "abort": true, 00:04:57.458 "seek_hole": false, 00:04:57.458 "seek_data": false, 00:04:57.458 "copy": true, 00:04:57.458 "nvme_iov_md": false 00:04:57.458 }, 00:04:57.458 "memory_domains": [ 00:04:57.458 { 00:04:57.458 "dma_device_id": "system", 00:04:57.458 "dma_device_type": 1 00:04:57.458 }, 00:04:57.458 { 00:04:57.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.458 "dma_device_type": 2 00:04:57.458 } 00:04:57.458 ], 00:04:57.458 "driver_specific": {} 00:04:57.458 }, 00:04:57.458 { 00:04:57.458 "name": "Passthru0", 00:04:57.458 "aliases": [ 00:04:57.458 "020bd3af-ad30-596e-b64a-dca22b0a778f" 00:04:57.458 ], 00:04:57.458 "product_name": "passthru", 00:04:57.458 "block_size": 512, 00:04:57.458 "num_blocks": 16384, 00:04:57.458 "uuid": "020bd3af-ad30-596e-b64a-dca22b0a778f", 00:04:57.458 "assigned_rate_limits": { 00:04:57.458 "rw_ios_per_sec": 0, 00:04:57.458 "rw_mbytes_per_sec": 0, 00:04:57.458 "r_mbytes_per_sec": 0, 00:04:57.458 "w_mbytes_per_sec": 0 00:04:57.458 }, 00:04:57.458 "claimed": false, 00:04:57.458 "zoned": false, 00:04:57.458 "supported_io_types": { 00:04:57.458 "read": true, 00:04:57.458 "write": true, 00:04:57.458 "unmap": true, 00:04:57.458 "flush": true, 00:04:57.458 "reset": true, 00:04:57.458 "nvme_admin": false, 00:04:57.458 "nvme_io": false, 00:04:57.458 "nvme_io_md": false, 00:04:57.458 "write_zeroes": true, 00:04:57.458 "zcopy": true, 00:04:57.458 "get_zone_info": false, 00:04:57.458 "zone_management": false, 00:04:57.458 "zone_append": false, 00:04:57.458 "compare": false, 00:04:57.458 "compare_and_write": false, 00:04:57.458 "abort": true, 00:04:57.458 "seek_hole": false, 00:04:57.458 "seek_data": false, 00:04:57.458 "copy": true, 00:04:57.458 "nvme_iov_md": false 00:04:57.458 }, 00:04:57.458 "memory_domains": [ 00:04:57.458 { 00:04:57.458 "dma_device_id": "system", 00:04:57.458 "dma_device_type": 1 00:04:57.458 }, 00:04:57.458 { 00:04:57.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.458 "dma_device_type": 2 00:04:57.458 } 00:04:57.458 ], 00:04:57.458 "driver_specific": { 00:04:57.458 "passthru": { 00:04:57.458 "name": "Passthru0", 00:04:57.458 "base_bdev_name": "Malloc0" 00:04:57.458 } 00:04:57.458 } 00:04:57.458 } 00:04:57.458 ]' 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.458 
15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.458 15:52:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.458 00:04:57.458 real 0m0.251s 00:04:57.458 user 0m0.157s 00:04:57.458 sys 0m0.028s 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.458 15:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 ************************************ 00:04:57.458 END TEST rpc_integrity 00:04:57.458 ************************************ 00:04:57.718 15:52:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:57.718 15:52:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.718 15:52:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.718 15:52:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 ************************************ 00:04:57.718 START TEST rpc_plugins 00:04:57.718 ************************************ 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:57.718 { 00:04:57.718 "name": "Malloc1", 00:04:57.718 "aliases": [ 00:04:57.718 "7b91626b-eca8-4528-beff-51867c18d86a" 00:04:57.718 ], 00:04:57.718 "product_name": "Malloc disk", 00:04:57.718 "block_size": 4096, 00:04:57.718 "num_blocks": 256, 00:04:57.718 "uuid": "7b91626b-eca8-4528-beff-51867c18d86a", 00:04:57.718 "assigned_rate_limits": { 00:04:57.718 "rw_ios_per_sec": 0, 00:04:57.718 "rw_mbytes_per_sec": 0, 00:04:57.718 "r_mbytes_per_sec": 0, 00:04:57.718 "w_mbytes_per_sec": 0 00:04:57.718 }, 00:04:57.718 "claimed": false, 00:04:57.718 "zoned": false, 00:04:57.718 "supported_io_types": { 00:04:57.718 "read": true, 00:04:57.718 "write": true, 00:04:57.718 "unmap": true, 00:04:57.718 "flush": true, 00:04:57.718 "reset": true, 00:04:57.718 "nvme_admin": false, 00:04:57.718 "nvme_io": false, 00:04:57.718 "nvme_io_md": false, 00:04:57.718 "write_zeroes": true, 00:04:57.718 "zcopy": true, 00:04:57.718 "get_zone_info": false, 00:04:57.718 "zone_management": false, 00:04:57.718 "zone_append": false, 00:04:57.718 "compare": false, 00:04:57.718 "compare_and_write": false, 00:04:57.718 "abort": true, 00:04:57.718 "seek_hole": false, 00:04:57.718 "seek_data": false, 00:04:57.718 "copy": true, 00:04:57.718 "nvme_iov_md": false 00:04:57.718 }, 00:04:57.718 
"memory_domains": [ 00:04:57.718 { 00:04:57.718 "dma_device_id": "system", 00:04:57.718 "dma_device_type": 1 00:04:57.718 }, 00:04:57.718 { 00:04:57.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.718 "dma_device_type": 2 00:04:57.718 } 00:04:57.718 ], 00:04:57.718 "driver_specific": {} 00:04:57.718 } 00:04:57.718 ]' 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:57.718 15:52:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:57.718 00:04:57.718 real 0m0.138s 00:04:57.718 user 0m0.079s 00:04:57.718 sys 0m0.028s 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.718 15:52:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 ************************************ 00:04:57.718 END TEST rpc_plugins 00:04:57.718 ************************************ 00:04:57.718 15:52:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:57.718 15:52:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.718 15:52:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.719 15:52:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.978 ************************************ 00:04:57.978 START TEST rpc_trace_cmd_test 00:04:57.978 ************************************ 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:57.978 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2622210", 00:04:57.978 "tpoint_group_mask": "0x8", 00:04:57.978 "iscsi_conn": { 00:04:57.978 "mask": "0x2", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "scsi": { 00:04:57.978 "mask": "0x4", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "bdev": { 00:04:57.978 "mask": "0x8", 00:04:57.978 "tpoint_mask": "0xffffffffffffffff" 00:04:57.978 }, 00:04:57.978 "nvmf_rdma": { 00:04:57.978 "mask": "0x10", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "nvmf_tcp": { 00:04:57.978 "mask": "0x20", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 
00:04:57.978 "ftl": { 00:04:57.978 "mask": "0x40", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "blobfs": { 00:04:57.978 "mask": "0x80", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "dsa": { 00:04:57.978 "mask": "0x200", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "thread": { 00:04:57.978 "mask": "0x400", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "nvme_pcie": { 00:04:57.978 "mask": "0x800", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "iaa": { 00:04:57.978 "mask": "0x1000", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "nvme_tcp": { 00:04:57.978 "mask": "0x2000", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "bdev_nvme": { 00:04:57.978 "mask": "0x4000", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "sock": { 00:04:57.978 "mask": "0x8000", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "blob": { 00:04:57.978 "mask": "0x10000", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 }, 00:04:57.978 "bdev_raid": { 00:04:57.978 "mask": "0x20000", 00:04:57.978 "tpoint_mask": "0x0" 00:04:57.978 } 00:04:57.978 }' 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:57.978 00:04:57.978 real 0m0.200s 00:04:57.978 user 0m0.164s 00:04:57.978 sys 0m0.027s 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.978 15:52:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.978 ************************************ 00:04:57.978 END TEST rpc_trace_cmd_test 00:04:57.978 ************************************ 00:04:57.978 15:52:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:57.978 15:52:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:57.978 15:52:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:57.978 15:52:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.978 15:52:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.978 15:52:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.238 ************************************ 00:04:58.238 START TEST rpc_daemon_integrity 00:04:58.238 ************************************ 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.238 { 00:04:58.238 "name": "Malloc2", 00:04:58.238 "aliases": [ 00:04:58.238 "9bb7457f-dcaa-456f-8184-6e6f3c307f3e" 00:04:58.238 ], 00:04:58.238 "product_name": "Malloc disk", 00:04:58.238 "block_size": 512, 00:04:58.238 "num_blocks": 16384, 00:04:58.238 "uuid": "9bb7457f-dcaa-456f-8184-6e6f3c307f3e", 00:04:58.238 "assigned_rate_limits": { 00:04:58.238 "rw_ios_per_sec": 0, 00:04:58.238 "rw_mbytes_per_sec": 0, 00:04:58.238 "r_mbytes_per_sec": 0, 00:04:58.238 "w_mbytes_per_sec": 0 00:04:58.238 }, 00:04:58.238 "claimed": false, 00:04:58.238 "zoned": false, 00:04:58.238 "supported_io_types": { 00:04:58.238 "read": true, 00:04:58.238 "write": true, 00:04:58.238 "unmap": true, 00:04:58.238 "flush": true, 00:04:58.238 "reset": true, 00:04:58.238 "nvme_admin": false, 00:04:58.238 "nvme_io": false, 00:04:58.238 "nvme_io_md": false, 00:04:58.238 "write_zeroes": true, 00:04:58.238 "zcopy": true, 00:04:58.238 "get_zone_info": false, 00:04:58.238 "zone_management": false, 00:04:58.238 "zone_append": false, 00:04:58.238 "compare": false, 00:04:58.238 "compare_and_write": false, 00:04:58.238 "abort": true, 00:04:58.238 "seek_hole": false, 00:04:58.238 "seek_data": false, 00:04:58.238 "copy": true, 00:04:58.238 "nvme_iov_md": false 00:04:58.238 }, 00:04:58.238 "memory_domains": [ 00:04:58.238 { 00:04:58.238 "dma_device_id": "system", 00:04:58.238 "dma_device_type": 1 00:04:58.238 }, 00:04:58.238 { 00:04:58.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.238 "dma_device_type": 2 00:04:58.238 } 00:04:58.238 ], 00:04:58.238 "driver_specific": {} 00:04:58.238 } 00:04:58.238 ]' 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.238 [2024-12-15 15:52:26.710093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:58.238 [2024-12-15 15:52:26.710119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.238 [2024-12-15 15:52:26.710133] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1405690 00:04:58.238 [2024-12-15 15:52:26.710141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.238 [2024-12-15 15:52:26.711053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.238 [2024-12-15 15:52:26.711074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.238 Passthru0 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.238 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.238 { 00:04:58.238 "name": "Malloc2", 00:04:58.238 "aliases": [ 00:04:58.238 "9bb7457f-dcaa-456f-8184-6e6f3c307f3e" 00:04:58.238 ], 00:04:58.238 "product_name": "Malloc disk", 00:04:58.238 "block_size": 512, 00:04:58.238 "num_blocks": 16384, 00:04:58.238 "uuid": "9bb7457f-dcaa-456f-8184-6e6f3c307f3e", 00:04:58.238 "assigned_rate_limits": { 00:04:58.238 "rw_ios_per_sec": 0, 00:04:58.238 "rw_mbytes_per_sec": 0, 00:04:58.238 "r_mbytes_per_sec": 0, 00:04:58.238 "w_mbytes_per_sec": 0 00:04:58.238 }, 00:04:58.238 "claimed": true, 00:04:58.238 "claim_type": "exclusive_write", 00:04:58.238 "zoned": false, 00:04:58.238 "supported_io_types": { 00:04:58.238 "read": true, 00:04:58.238 "write": true, 00:04:58.238 "unmap": true, 00:04:58.238 "flush": true, 00:04:58.238 "reset": true, 00:04:58.238 "nvme_admin": false, 00:04:58.238 "nvme_io": false, 00:04:58.238 "nvme_io_md": false, 00:04:58.238 "write_zeroes": true, 00:04:58.238 "zcopy": true, 00:04:58.238 "get_zone_info": false, 00:04:58.238 "zone_management": false, 00:04:58.238 "zone_append": false, 00:04:58.238 "compare": false, 00:04:58.238 "compare_and_write": false, 00:04:58.238 "abort": true, 00:04:58.238 "seek_hole": false, 00:04:58.239 "seek_data": false, 00:04:58.239 "copy": true, 00:04:58.239 "nvme_iov_md": false 00:04:58.239 }, 00:04:58.239 "memory_domains": [ 00:04:58.239 { 00:04:58.239 "dma_device_id": "system", 00:04:58.239 "dma_device_type": 1 00:04:58.239 }, 00:04:58.239 { 00:04:58.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.239 "dma_device_type": 2 00:04:58.239 } 00:04:58.239 ], 00:04:58.239 "driver_specific": {} 00:04:58.239 }, 00:04:58.239 { 00:04:58.239 "name": "Passthru0", 00:04:58.239 "aliases": [ 00:04:58.239 "d8cfa99b-311f-53f6-b1b8-2c79fcbbb220" 00:04:58.239 ], 00:04:58.239 "product_name": "passthru", 00:04:58.239 "block_size": 512, 00:04:58.239 "num_blocks": 16384, 00:04:58.239 "uuid": "d8cfa99b-311f-53f6-b1b8-2c79fcbbb220", 00:04:58.239 "assigned_rate_limits": { 00:04:58.239 "rw_ios_per_sec": 0, 00:04:58.239 "rw_mbytes_per_sec": 0, 00:04:58.239 "r_mbytes_per_sec": 0, 00:04:58.239 "w_mbytes_per_sec": 0 00:04:58.239 }, 00:04:58.239 "claimed": false, 00:04:58.239 "zoned": false, 00:04:58.239 "supported_io_types": { 00:04:58.239 "read": true, 00:04:58.239 "write": true, 00:04:58.239 "unmap": true, 00:04:58.239 "flush": true, 00:04:58.239 "reset": true, 00:04:58.239 "nvme_admin": false, 00:04:58.239 "nvme_io": false, 00:04:58.239 "nvme_io_md": false, 00:04:58.239 "write_zeroes": true, 00:04:58.239 "zcopy": true, 
00:04:58.239 "get_zone_info": false, 00:04:58.239 "zone_management": false, 00:04:58.239 "zone_append": false, 00:04:58.239 "compare": false, 00:04:58.239 "compare_and_write": false, 00:04:58.239 "abort": true, 00:04:58.239 "seek_hole": false, 00:04:58.239 "seek_data": false, 00:04:58.239 "copy": true, 00:04:58.239 "nvme_iov_md": false 00:04:58.239 }, 00:04:58.239 "memory_domains": [ 00:04:58.239 { 00:04:58.239 "dma_device_id": "system", 00:04:58.239 "dma_device_type": 1 00:04:58.239 }, 00:04:58.239 { 00:04:58.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.239 "dma_device_type": 2 00:04:58.239 } 00:04:58.239 ], 00:04:58.239 "driver_specific": { 00:04:58.239 "passthru": { 00:04:58.239 "name": "Passthru0", 00:04:58.239 "base_bdev_name": "Malloc2" 00:04:58.239 } 00:04:58.239 } 00:04:58.239 } 00:04:58.239 ]' 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.239 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.498 15:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.498 00:04:58.498 real 0m0.266s 00:04:58.498 user 0m0.167s 00:04:58.498 sys 0m0.039s 00:04:58.498 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.498 15:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.498 ************************************ 00:04:58.498 END TEST rpc_daemon_integrity 00:04:58.498 ************************************ 00:04:58.498 15:52:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:58.498 15:52:26 rpc -- rpc/rpc.sh@84 -- # killprocess 2622210 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@950 -- # '[' -z 2622210 ']' 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@954 -- # kill -0 2622210 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@955 -- # uname 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2622210 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2622210' 00:04:58.498 killing process with pid 2622210 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@969 -- # kill 2622210 00:04:58.498 15:52:26 rpc -- common/autotest_common.sh@974 -- # wait 2622210 00:04:58.758 00:04:58.758 real 0m2.061s 00:04:58.758 user 0m2.547s 00:04:58.758 sys 0m0.768s 00:04:58.758 15:52:27 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.758 15:52:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.758 ************************************ 00:04:58.758 END TEST rpc 00:04:58.758 ************************************ 00:04:58.758 15:52:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:58.758 15:52:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.758 15:52:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.758 15:52:27 -- common/autotest_common.sh@10 -- # set +x 00:04:59.017 ************************************ 00:04:59.017 START TEST skip_rpc 00:04:59.017 ************************************ 00:04:59.017 15:52:27 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:59.017 * Looking for test storage... 00:04:59.017 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:59.017 15:52:27 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.017 15:52:27 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.017 15:52:27 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.017 15:52:27 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.017 15:52:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.018 15:52:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:59.018 15:52:27 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.018 15:52:27 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.018 --rc genhtml_branch_coverage=1 00:04:59.018 --rc genhtml_function_coverage=1 00:04:59.018 --rc genhtml_legend=1 00:04:59.018 --rc geninfo_all_blocks=1 00:04:59.018 --rc geninfo_unexecuted_blocks=1 00:04:59.018 00:04:59.018 ' 00:04:59.018 15:52:27 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.018 --rc genhtml_branch_coverage=1 00:04:59.018 --rc genhtml_function_coverage=1 00:04:59.018 --rc genhtml_legend=1 00:04:59.018 --rc geninfo_all_blocks=1 00:04:59.018 --rc geninfo_unexecuted_blocks=1 00:04:59.018 00:04:59.018 ' 00:04:59.018 15:52:27 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.018 --rc genhtml_branch_coverage=1 00:04:59.018 --rc genhtml_function_coverage=1 00:04:59.018 --rc genhtml_legend=1 00:04:59.018 --rc geninfo_all_blocks=1 00:04:59.018 --rc geninfo_unexecuted_blocks=1 00:04:59.018 00:04:59.018 ' 00:04:59.018 15:52:27 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.018 --rc genhtml_branch_coverage=1 00:04:59.018 --rc genhtml_function_coverage=1 00:04:59.018 --rc genhtml_legend=1 00:04:59.018 --rc geninfo_all_blocks=1 00:04:59.018 --rc geninfo_unexecuted_blocks=1 00:04:59.018 00:04:59.018 ' 00:04:59.018 15:52:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:59.018 15:52:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:59.018 15:52:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:59.018 15:52:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.018 15:52:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.018 15:52:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.018 ************************************ 00:04:59.018 START TEST skip_rpc 00:04:59.018 ************************************ 00:04:59.018 15:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:59.018 15:52:27 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2622856 00:04:59.018 15:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.018 15:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:59.018 15:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:59.277 [2024-12-15 15:52:27.621256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:59.277 [2024-12-15 15:52:27.621300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2622856 ] 00:04:59.277 [2024-12-15 15:52:27.690015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.277 [2024-12-15 15:52:27.728657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.549 15:52:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:04.549 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:04.549 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2622856 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2622856 ']' 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2622856 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2622856 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2622856' 00:05:04.550 killing process with pid 2622856 00:05:04.550 15:52:32 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2622856 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2622856 00:05:04.550 00:05:04.550 real 0m5.392s 00:05:04.550 user 0m5.140s 00:05:04.550 sys 0m0.303s 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.550 15:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.550 ************************************ 00:05:04.550 END TEST skip_rpc 00:05:04.550 ************************************ 00:05:04.550 15:52:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:04.550 15:52:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.550 15:52:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.550 15:52:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.550 ************************************ 00:05:04.550 START TEST skip_rpc_with_json 00:05:04.550 ************************************ 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2623731 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2623731 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2623731 ']' 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.550 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.550 [2024-12-15 15:52:33.091111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
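
(Illustrative sketch, not part of the captured log: the TEST skip_rpc sequence above reduces to starting the target without an RPC server and expecting any RPC call to fail. The relative paths and the rpc.py helper are assumptions based on a standard SPDK checkout.)

./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # no RPC listener is created
tgt_pid=$!
sleep 5                                          # give the reactor time to start
if ./scripts/rpc.py spdk_get_version; then       # must fail: nothing listens on /var/tmp/spdk.sock
    echo "FAIL: RPC succeeded with --no-rpc-server" >&2
fi
kill $tgt_pid && wait $tgt_pid
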
00:05:04.550 [2024-12-15 15:52:33.091155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623731 ] 00:05:04.809 [2024-12-15 15:52:33.162523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.809 [2024-12-15 15:52:33.201347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.068 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.068 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:05.068 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:05.068 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.068 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.068 [2024-12-15 15:52:33.403092] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:05.068 request: 00:05:05.068 { 00:05:05.068 "trtype": "tcp", 00:05:05.068 "method": "nvmf_get_transports", 00:05:05.069 "req_id": 1 00:05:05.069 } 00:05:05.069 Got JSON-RPC error response 00:05:05.069 response: 00:05:05.069 { 00:05:05.069 "code": -19, 00:05:05.069 "message": "No such device" 00:05:05.069 } 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.069 [2024-12-15 15:52:33.411182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.069 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:05.069 { 00:05:05.069 "subsystems": [ 00:05:05.069 { 00:05:05.069 "subsystem": "fsdev", 00:05:05.069 "config": [ 00:05:05.069 { 00:05:05.069 "method": "fsdev_set_opts", 00:05:05.069 "params": { 00:05:05.069 "fsdev_io_pool_size": 65535, 00:05:05.069 "fsdev_io_cache_size": 256 00:05:05.069 } 00:05:05.069 } 00:05:05.069 ] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "keyring", 00:05:05.069 "config": [] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "iobuf", 00:05:05.069 "config": [ 00:05:05.069 { 00:05:05.069 "method": "iobuf_set_options", 00:05:05.069 "params": { 00:05:05.069 "small_pool_count": 8192, 00:05:05.069 "large_pool_count": 1024, 00:05:05.069 "small_bufsize": 8192, 00:05:05.069 "large_bufsize": 135168 00:05:05.069 } 00:05:05.069 } 00:05:05.069 ] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "sock", 00:05:05.069 "config": [ 00:05:05.069 { 00:05:05.069 "method": 
"sock_set_default_impl", 00:05:05.069 "params": { 00:05:05.069 "impl_name": "posix" 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "sock_impl_set_options", 00:05:05.069 "params": { 00:05:05.069 "impl_name": "ssl", 00:05:05.069 "recv_buf_size": 4096, 00:05:05.069 "send_buf_size": 4096, 00:05:05.069 "enable_recv_pipe": true, 00:05:05.069 "enable_quickack": false, 00:05:05.069 "enable_placement_id": 0, 00:05:05.069 "enable_zerocopy_send_server": true, 00:05:05.069 "enable_zerocopy_send_client": false, 00:05:05.069 "zerocopy_threshold": 0, 00:05:05.069 "tls_version": 0, 00:05:05.069 "enable_ktls": false 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "sock_impl_set_options", 00:05:05.069 "params": { 00:05:05.069 "impl_name": "posix", 00:05:05.069 "recv_buf_size": 2097152, 00:05:05.069 "send_buf_size": 2097152, 00:05:05.069 "enable_recv_pipe": true, 00:05:05.069 "enable_quickack": false, 00:05:05.069 "enable_placement_id": 0, 00:05:05.069 "enable_zerocopy_send_server": true, 00:05:05.069 "enable_zerocopy_send_client": false, 00:05:05.069 "zerocopy_threshold": 0, 00:05:05.069 "tls_version": 0, 00:05:05.069 "enable_ktls": false 00:05:05.069 } 00:05:05.069 } 00:05:05.069 ] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "vmd", 00:05:05.069 "config": [] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "accel", 00:05:05.069 "config": [ 00:05:05.069 { 00:05:05.069 "method": "accel_set_options", 00:05:05.069 "params": { 00:05:05.069 "small_cache_size": 128, 00:05:05.069 "large_cache_size": 16, 00:05:05.069 "task_count": 2048, 00:05:05.069 "sequence_count": 2048, 00:05:05.069 "buf_count": 2048 00:05:05.069 } 00:05:05.069 } 00:05:05.069 ] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "bdev", 00:05:05.069 "config": [ 00:05:05.069 { 00:05:05.069 "method": "bdev_set_options", 00:05:05.069 "params": { 00:05:05.069 "bdev_io_pool_size": 65535, 00:05:05.069 "bdev_io_cache_size": 256, 00:05:05.069 "bdev_auto_examine": true, 00:05:05.069 "iobuf_small_cache_size": 128, 00:05:05.069 "iobuf_large_cache_size": 16 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "bdev_raid_set_options", 00:05:05.069 "params": { 00:05:05.069 "process_window_size_kb": 1024, 00:05:05.069 "process_max_bandwidth_mb_sec": 0 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "bdev_iscsi_set_options", 00:05:05.069 "params": { 00:05:05.069 "timeout_sec": 30 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "bdev_nvme_set_options", 00:05:05.069 "params": { 00:05:05.069 "action_on_timeout": "none", 00:05:05.069 "timeout_us": 0, 00:05:05.069 "timeout_admin_us": 0, 00:05:05.069 "keep_alive_timeout_ms": 10000, 00:05:05.069 "arbitration_burst": 0, 00:05:05.069 "low_priority_weight": 0, 00:05:05.069 "medium_priority_weight": 0, 00:05:05.069 "high_priority_weight": 0, 00:05:05.069 "nvme_adminq_poll_period_us": 10000, 00:05:05.069 "nvme_ioq_poll_period_us": 0, 00:05:05.069 "io_queue_requests": 0, 00:05:05.069 "delay_cmd_submit": true, 00:05:05.069 "transport_retry_count": 4, 00:05:05.069 "bdev_retry_count": 3, 00:05:05.069 "transport_ack_timeout": 0, 00:05:05.069 "ctrlr_loss_timeout_sec": 0, 00:05:05.069 "reconnect_delay_sec": 0, 00:05:05.069 "fast_io_fail_timeout_sec": 0, 00:05:05.069 "disable_auto_failback": false, 00:05:05.069 "generate_uuids": false, 00:05:05.069 "transport_tos": 0, 00:05:05.069 "nvme_error_stat": false, 00:05:05.069 "rdma_srq_size": 0, 00:05:05.069 "io_path_stat": false, 00:05:05.069 
"allow_accel_sequence": false, 00:05:05.069 "rdma_max_cq_size": 0, 00:05:05.069 "rdma_cm_event_timeout_ms": 0, 00:05:05.069 "dhchap_digests": [ 00:05:05.069 "sha256", 00:05:05.069 "sha384", 00:05:05.069 "sha512" 00:05:05.069 ], 00:05:05.069 "dhchap_dhgroups": [ 00:05:05.069 "null", 00:05:05.069 "ffdhe2048", 00:05:05.069 "ffdhe3072", 00:05:05.069 "ffdhe4096", 00:05:05.069 "ffdhe6144", 00:05:05.069 "ffdhe8192" 00:05:05.069 ] 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "bdev_nvme_set_hotplug", 00:05:05.069 "params": { 00:05:05.069 "period_us": 100000, 00:05:05.069 "enable": false 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "bdev_wait_for_examine" 00:05:05.069 } 00:05:05.069 ] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "scsi", 00:05:05.069 "config": null 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "scheduler", 00:05:05.069 "config": [ 00:05:05.069 { 00:05:05.069 "method": "framework_set_scheduler", 00:05:05.069 "params": { 00:05:05.069 "name": "static" 00:05:05.069 } 00:05:05.069 } 00:05:05.069 ] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "vhost_scsi", 00:05:05.069 "config": [] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "vhost_blk", 00:05:05.069 "config": [] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "ublk", 00:05:05.069 "config": [] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "nbd", 00:05:05.069 "config": [] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "nvmf", 00:05:05.069 "config": [ 00:05:05.069 { 00:05:05.069 "method": "nvmf_set_config", 00:05:05.069 "params": { 00:05:05.069 "discovery_filter": "match_any", 00:05:05.069 "admin_cmd_passthru": { 00:05:05.069 "identify_ctrlr": false 00:05:05.069 }, 00:05:05.069 "dhchap_digests": [ 00:05:05.069 "sha256", 00:05:05.069 "sha384", 00:05:05.069 "sha512" 00:05:05.069 ], 00:05:05.069 "dhchap_dhgroups": [ 00:05:05.069 "null", 00:05:05.069 "ffdhe2048", 00:05:05.069 "ffdhe3072", 00:05:05.069 "ffdhe4096", 00:05:05.069 "ffdhe6144", 00:05:05.069 "ffdhe8192" 00:05:05.069 ] 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "nvmf_set_max_subsystems", 00:05:05.069 "params": { 00:05:05.069 "max_subsystems": 1024 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "nvmf_set_crdt", 00:05:05.069 "params": { 00:05:05.069 "crdt1": 0, 00:05:05.069 "crdt2": 0, 00:05:05.069 "crdt3": 0 00:05:05.069 } 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "method": "nvmf_create_transport", 00:05:05.069 "params": { 00:05:05.069 "trtype": "TCP", 00:05:05.069 "max_queue_depth": 128, 00:05:05.069 "max_io_qpairs_per_ctrlr": 127, 00:05:05.069 "in_capsule_data_size": 4096, 00:05:05.069 "max_io_size": 131072, 00:05:05.069 "io_unit_size": 131072, 00:05:05.069 "max_aq_depth": 128, 00:05:05.069 "num_shared_buffers": 511, 00:05:05.069 "buf_cache_size": 4294967295, 00:05:05.069 "dif_insert_or_strip": false, 00:05:05.069 "zcopy": false, 00:05:05.069 "c2h_success": true, 00:05:05.069 "sock_priority": 0, 00:05:05.069 "abort_timeout_sec": 1, 00:05:05.069 "ack_timeout": 0, 00:05:05.069 "data_wr_pool_size": 0 00:05:05.069 } 00:05:05.069 } 00:05:05.069 ] 00:05:05.069 }, 00:05:05.069 { 00:05:05.069 "subsystem": "iscsi", 00:05:05.069 "config": [ 00:05:05.069 { 00:05:05.069 "method": "iscsi_set_options", 00:05:05.069 "params": { 00:05:05.069 "node_base": "iqn.2016-06.io.spdk", 00:05:05.069 "max_sessions": 128, 00:05:05.069 "max_connections_per_session": 2, 00:05:05.069 "max_queue_depth": 64, 00:05:05.069 "default_time2wait": 2, 
00:05:05.069 "default_time2retain": 20, 00:05:05.069 "first_burst_length": 8192, 00:05:05.069 "immediate_data": true, 00:05:05.069 "allow_duplicated_isid": false, 00:05:05.070 "error_recovery_level": 0, 00:05:05.070 "nop_timeout": 60, 00:05:05.070 "nop_in_interval": 30, 00:05:05.070 "disable_chap": false, 00:05:05.070 "require_chap": false, 00:05:05.070 "mutual_chap": false, 00:05:05.070 "chap_group": 0, 00:05:05.070 "max_large_datain_per_connection": 64, 00:05:05.070 "max_r2t_per_connection": 4, 00:05:05.070 "pdu_pool_size": 36864, 00:05:05.070 "immediate_data_pool_size": 16384, 00:05:05.070 "data_out_pool_size": 2048 00:05:05.070 } 00:05:05.070 } 00:05:05.070 ] 00:05:05.070 } 00:05:05.070 ] 00:05:05.070 } 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2623731 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2623731 ']' 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2623731 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2623731 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2623731' 00:05:05.070 killing process with pid 2623731 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2623731 00:05:05.070 15:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2623731 00:05:05.638 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2623970 00:05:05.638 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:05.638 15:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:10.909 15:52:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2623970 00:05:10.909 15:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2623970 ']' 00:05:10.909 15:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2623970 00:05:10.909 15:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:10.909 15:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.909 15:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2623970 00:05:10.909 15:52:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.909 15:52:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.909 15:52:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2623970' 00:05:10.909 killing process with pid 2623970 00:05:10.909 15:52:39 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2623970 00:05:10.909 15:52:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2623970 00:05:10.909 15:52:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:10.910 00:05:10.910 real 0m6.285s 00:05:10.910 user 0m5.954s 00:05:10.910 sys 0m0.625s 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.910 ************************************ 00:05:10.910 END TEST skip_rpc_with_json 00:05:10.910 ************************************ 00:05:10.910 15:52:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:10.910 15:52:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.910 15:52:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.910 15:52:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.910 ************************************ 00:05:10.910 START TEST skip_rpc_with_delay 00:05:10.910 ************************************ 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.910 [2024-12-15 15:52:39.448942] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
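
(Illustrative sketch, not part of the captured log: TEST skip_rpc_with_json above is a save/replay round trip — configure a live target over RPC, dump its state with save_config, then relaunch with --json and verify the transport comes back without any RPC calls. The rpc.py helper and local file names are assumptions.)

./scripts/rpc.py nvmf_create_transport -t tcp     # configure the live target
./scripts/rpc.py save_config > config.json        # dump current state as JSON
# Relaunch with no RPC server; the saved config is replayed at startup.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' log.txt && echo "TCP transport restored from JSON"
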
00:05:10.910 [2024-12-15 15:52:39.449008] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.910 00:05:10.910 real 0m0.068s 00:05:10.910 user 0m0.035s 00:05:10.910 sys 0m0.032s 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.910 15:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:10.910 ************************************ 00:05:10.910 END TEST skip_rpc_with_delay 00:05:10.910 ************************************ 00:05:11.169 15:52:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:11.169 15:52:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:11.169 15:52:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:11.169 15:52:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.169 15:52:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.169 15:52:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.169 ************************************ 00:05:11.169 START TEST exit_on_failed_rpc_init 00:05:11.169 ************************************ 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2625034 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2625034 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2625034 ']' 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.169 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.169 [2024-12-15 15:52:39.577044] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
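
(Illustrative sketch: TEST skip_rpc_with_delay above checks a single invariant — --wait-for-rpc is meaningless when no RPC server will be started, so spdk_tgt must refuse to boot. Paths are assumptions; the expected error text is taken verbatim from the run above.)

if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: target started despite conflicting flags" >&2
fi
# Expected on stderr:
#   Cannot use '--wait-for-rpc' if no RPC server is going to be started.
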
00:05:11.169 [2024-12-15 15:52:39.577084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625034 ] 00:05:11.169 [2024-12-15 15:52:39.647404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.169 [2024-12-15 15:52:39.686592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:11.429 15:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.429 [2024-12-15 15:52:39.934738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:11.429 [2024-12-15 15:52:39.934791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625086 ] 00:05:11.688 [2024-12-15 15:52:40.002568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.688 [2024-12-15 15:52:40.043360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.688 [2024-12-15 15:52:40.043421] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
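
(Illustrative sketch: the errors above are the point of TEST exit_on_failed_rpc_init — a second target cannot bind the default /var/tmp/spdk.sock while the first instance holds it, and must exit non-zero. Using -r to point a second instance at its own RPC socket is an assumption about the spdk_tgt CLI, not something exercised in this run.)

./build/bin/spdk_tgt -m 0x1 &                     # first instance claims /var/tmp/spdk.sock
sleep 5
./build/bin/spdk_tgt -m 0x2 \
    || echo "second instance exits: RPC socket in use"
# Hypothetical workaround: give each instance its own RPC socket.
# ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
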
00:05:11.688 [2024-12-15 15:52:40.043433] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:11.688 [2024-12-15 15:52:40.043441] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2625034 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2625034 ']' 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2625034 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2625034 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2625034' 00:05:11.688 killing process with pid 2625034 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2625034 00:05:11.688 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2625034 00:05:11.947 00:05:11.947 real 0m0.956s 00:05:11.947 user 0m0.998s 00:05:11.947 sys 0m0.423s 00:05:11.947 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.947 15:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.947 ************************************ 00:05:11.947 END TEST exit_on_failed_rpc_init 00:05:11.947 ************************************ 00:05:12.206 15:52:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:12.206 00:05:12.206 real 0m13.201s 00:05:12.206 user 0m12.315s 00:05:12.206 sys 0m1.726s 00:05:12.206 15:52:40 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.206 15:52:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.206 ************************************ 00:05:12.206 END TEST skip_rpc 00:05:12.206 ************************************ 00:05:12.206 15:52:40 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:12.206 15:52:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.206 15:52:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.206 15:52:40 -- 
common/autotest_common.sh@10 -- # set +x 00:05:12.206 ************************************ 00:05:12.206 START TEST rpc_client 00:05:12.206 ************************************ 00:05:12.206 15:52:40 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:12.206 * Looking for test storage... 00:05:12.206 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:12.206 15:52:40 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.206 15:52:40 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.206 15:52:40 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.465 15:52:40 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.465 15:52:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:12.465 15:52:40 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.465 15:52:40 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.465 --rc genhtml_branch_coverage=1 00:05:12.465 --rc genhtml_function_coverage=1 00:05:12.465 --rc genhtml_legend=1 00:05:12.465 --rc geninfo_all_blocks=1 00:05:12.465 --rc geninfo_unexecuted_blocks=1 00:05:12.465 00:05:12.465 ' 00:05:12.465 15:52:40 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.465 --rc genhtml_branch_coverage=1 00:05:12.465 --rc genhtml_function_coverage=1 00:05:12.465 --rc genhtml_legend=1 00:05:12.465 --rc geninfo_all_blocks=1 00:05:12.465 --rc geninfo_unexecuted_blocks=1 00:05:12.465 00:05:12.465 ' 00:05:12.465 15:52:40 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.465 --rc genhtml_branch_coverage=1 00:05:12.465 --rc genhtml_function_coverage=1 00:05:12.465 --rc genhtml_legend=1 00:05:12.465 --rc geninfo_all_blocks=1 00:05:12.465 --rc geninfo_unexecuted_blocks=1 00:05:12.465 00:05:12.465 ' 00:05:12.465 15:52:40 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.465 --rc genhtml_branch_coverage=1 00:05:12.465 --rc genhtml_function_coverage=1 00:05:12.465 --rc genhtml_legend=1 00:05:12.465 --rc geninfo_all_blocks=1 00:05:12.465 --rc geninfo_unexecuted_blocks=1 00:05:12.465 00:05:12.465 ' 00:05:12.465 15:52:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:12.465 OK 00:05:12.465 15:52:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:12.465 00:05:12.465 real 0m0.220s 00:05:12.465 user 0m0.125s 00:05:12.465 sys 0m0.113s 00:05:12.465 15:52:40 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.465 15:52:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:12.465 ************************************ 00:05:12.465 END TEST rpc_client 00:05:12.465 ************************************ 00:05:12.465 15:52:40 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:12.465 
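The lcov version probe traced above (scripts/common.sh, repeated below for each test suite) is an ordinary field-wise comparison: split both version strings on '.', '-' and ':' and compare component by component, treating missing fields as zero. A self-contained sketch of the same idea, assuming purely numeric components (the real helper additionally normalizes each field through its decimal function, as the trace shows):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields compare as 0, so "1.15" vs "2" is 1.15 < 2.0.
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: fall back to --rc lcov_* option spelling"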
15:52:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.465 15:52:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.465 15:52:40 -- common/autotest_common.sh@10 -- # set +x 00:05:12.465 ************************************ 00:05:12.465 START TEST json_config 00:05:12.465 ************************************ 00:05:12.465 15:52:40 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:12.465 15:52:40 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.465 15:52:40 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.465 15:52:40 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.725 15:52:41 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.725 15:52:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.725 15:52:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.725 15:52:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.725 15:52:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.725 15:52:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.725 15:52:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.725 15:52:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.725 15:52:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.725 15:52:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.725 15:52:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.725 15:52:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.725 15:52:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:12.725 15:52:41 json_config -- scripts/common.sh@345 -- # : 1 00:05:12.725 15:52:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.725 15:52:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.725 15:52:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:12.725 15:52:41 json_config -- scripts/common.sh@353 -- # local d=1 00:05:12.725 15:52:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.725 15:52:41 json_config -- scripts/common.sh@355 -- # echo 1 00:05:12.725 15:52:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.725 15:52:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:12.725 15:52:41 json_config -- scripts/common.sh@353 -- # local d=2 00:05:12.725 15:52:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.725 15:52:41 json_config -- scripts/common.sh@355 -- # echo 2 00:05:12.725 15:52:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.725 15:52:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.725 15:52:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.725 15:52:41 json_config -- scripts/common.sh@368 -- # return 0 00:05:12.725 15:52:41 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.725 15:52:41 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.725 --rc genhtml_branch_coverage=1 00:05:12.725 --rc genhtml_function_coverage=1 00:05:12.725 --rc genhtml_legend=1 00:05:12.725 --rc geninfo_all_blocks=1 00:05:12.725 --rc geninfo_unexecuted_blocks=1 00:05:12.725 00:05:12.725 ' 00:05:12.725 15:52:41 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.725 --rc genhtml_branch_coverage=1 00:05:12.725 --rc genhtml_function_coverage=1 00:05:12.725 --rc genhtml_legend=1 00:05:12.725 --rc geninfo_all_blocks=1 00:05:12.725 --rc geninfo_unexecuted_blocks=1 00:05:12.725 00:05:12.725 ' 00:05:12.725 15:52:41 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.725 --rc genhtml_branch_coverage=1 00:05:12.725 --rc genhtml_function_coverage=1 00:05:12.725 --rc genhtml_legend=1 00:05:12.725 --rc geninfo_all_blocks=1 00:05:12.725 --rc geninfo_unexecuted_blocks=1 00:05:12.725 00:05:12.725 ' 00:05:12.725 15:52:41 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.725 --rc genhtml_branch_coverage=1 00:05:12.725 --rc genhtml_function_coverage=1 00:05:12.725 --rc genhtml_legend=1 00:05:12.725 --rc geninfo_all_blocks=1 00:05:12.725 --rc geninfo_unexecuted_blocks=1 00:05:12.725 00:05:12.725 ' 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:12.726 15:52:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:12.726 15:52:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.726 15:52:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.726 15:52:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.726 15:52:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.726 15:52:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.726 15:52:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.726 15:52:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.726 15:52:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:12.726 15:52:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@51 -- # : 0 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.726 
15:52:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.726 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.726 15:52:41 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:12.726 INFO: JSON configuration test init 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.726 15:52:41 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:12.726 15:52:41 json_config -- json_config/common.sh@9 -- # 
local app=target 00:05:12.726 15:52:41 json_config -- json_config/common.sh@10 -- # shift 00:05:12.726 15:52:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.726 15:52:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.726 15:52:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.726 15:52:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.726 15:52:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.726 15:52:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2625449 00:05:12.726 15:52:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.726 Waiting for target to run... 00:05:12.726 15:52:41 json_config -- json_config/common.sh@25 -- # waitforlisten 2625449 /var/tmp/spdk_tgt.sock 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@831 -- # '[' -z 2625449 ']' 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.726 15:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.726 15:52:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:12.726 [2024-12-15 15:52:41.152846] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:12.726 [2024-12-15 15:52:41.152901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625449 ] 00:05:12.985 [2024-12-15 15:52:41.434485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.985 [2024-12-15 15:52:41.455997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.553 15:52:41 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.553 15:52:41 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:13.553 15:52:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.553 00:05:13.554 15:52:41 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:13.554 15:52:41 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:13.554 15:52:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.554 15:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.554 15:52:41 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:13.554 15:52:41 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:13.554 15:52:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.554 15:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.554 15:52:42 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:13.554 15:52:42 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:13.554 15:52:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:16.841 15:52:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@54 -- # sort 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:16.841 15:52:45 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@472 -- # prepare_net_devs 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@434 -- # local -g is_hw=no 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@436 -- # remove_spdk_ns 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@438 -- # [[ phy-fallback != virt ]] 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:05:16.841 15:52:45 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:16.841 15:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:24.961 
15:52:52 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:24.961 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:24.961 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@370 -- # [[ 
mlx5_core == unbound ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:24.961 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:24.961 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@438 -- # is_hw=yes 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@444 -- # rdma_device_init 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@62 -- # uname 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:24.961 15:52:52 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@526 -- # allocate_nic_ips 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 
00:05:24.962 15:52:52 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:24.962 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:24.962 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:24.962 altname enp217s0f0np0 00:05:24.962 altname ens818f0np0 00:05:24.962 inet 192.168.100.8/24 scope global mlx_0_0 00:05:24.962 valid_lft forever preferred_lft forever 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:24.962 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:24.962 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:24.962 altname enp217s0f1np1 00:05:24.962 altname ens818f1np1 00:05:24.962 inet 192.168.100.9/24 scope global mlx_0_1 00:05:24.962 valid_lft forever preferred_lft forever 
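The address harvesting traced above reduces to one pipeline per RDMA interface: take the fourth field of the one-line `ip -o -4 addr show` output and strip the prefix length. As a standalone sketch, with the interface name taken from this run's test bed:

    get_ip_address() {
        # -o prints one record per line; $4 is the CIDR, e.g. 192.168.100.8/24.
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this machine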
00:05:24.962 15:52:52 json_config -- nvmf/common.sh@446 -- # return 0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:05:24.962 192.168.100.9' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:05:24.962 192.168.100.9' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@481 -- # head -n 1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:05:24.962 192.168.100.9' 00:05:24.962 15:52:52 json_config -- 
nvmf/common.sh@482 -- # tail -n +2 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@482 -- # head -n 1 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:05:24.962 15:52:52 json_config -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:05:24.962 15:52:52 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:24.962 15:52:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:24.962 15:52:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:24.962 MallocForNvmf0 00:05:24.962 15:52:52 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:24.962 15:52:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:24.962 MallocForNvmf1 00:05:24.962 15:52:52 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:24.962 15:52:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:24.962 [2024-12-15 15:52:53.001563] rdma.c:2737:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:24.962 [2024-12-15 15:52:53.030954] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18977f0/0x176cb30) succeed. 00:05:24.962 [2024-12-15 15:52:53.044277] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1896830/0x17ec640) succeed. 
00:05:24.962 15:52:53 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.962 15:52:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.962 15:52:53 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.962 15:52:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.962 15:52:53 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:24.962 15:52:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:25.221 15:52:53 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:25.221 15:52:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:25.481 [2024-12-15 15:52:53.812229] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:25.481 15:52:53 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:25.481 15:52:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.481 15:52:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.481 15:52:53 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:25.481 15:52:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.481 15:52:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.481 15:52:53 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:25.481 15:52:53 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:25.481 15:52:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:25.740 MallocBdevForConfigChangeCheck 00:05:25.740 15:52:54 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:25.740 15:52:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.740 15:52:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.740 15:52:54 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:25.740 15:52:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.999 15:52:54 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:25.999 INFO: shutting down applications... 
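For reference, the target-side NVMe-oF configuration built up across the tgt_rpc calls above condenses to the following sequence. Every RPC name and argument is as traced; only the shell wrapper variable is added for brevity:

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420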
00:05:25.999 15:52:54 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:25.999 15:52:54 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:25.999 15:52:54 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:25.999 15:52:54 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:28.534 Calling clear_iscsi_subsystem 00:05:28.534 Calling clear_nvmf_subsystem 00:05:28.534 Calling clear_nbd_subsystem 00:05:28.534 Calling clear_ublk_subsystem 00:05:28.534 Calling clear_vhost_blk_subsystem 00:05:28.534 Calling clear_vhost_scsi_subsystem 00:05:28.534 Calling clear_bdev_subsystem 00:05:28.534 15:52:57 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:28.534 15:52:57 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:28.534 15:52:57 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:28.534 15:52:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.534 15:52:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:28.534 15:52:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:28.794 15:52:57 json_config -- json_config/json_config.sh@352 -- # break 00:05:28.794 15:52:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:28.794 15:52:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:28.794 15:52:57 json_config -- json_config/common.sh@31 -- # local app=target 00:05:28.794 15:52:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:28.794 15:52:57 json_config -- json_config/common.sh@35 -- # [[ -n 2625449 ]] 00:05:28.794 15:52:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2625449 00:05:28.794 15:52:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:28.794 15:52:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.794 15:52:57 json_config -- json_config/common.sh@41 -- # kill -0 2625449 00:05:28.794 15:52:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.362 15:52:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.362 15:52:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.362 15:52:57 json_config -- json_config/common.sh@41 -- # kill -0 2625449 00:05:29.362 15:52:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:29.362 15:52:57 json_config -- json_config/common.sh@43 -- # break 00:05:29.362 15:52:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:29.362 15:52:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:29.362 SPDK target shutdown done 00:05:29.362 15:52:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:29.362 INFO: relaunching applications... 
00:05:29.362 15:52:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.362 15:52:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:29.362 15:52:57 json_config -- json_config/common.sh@10 -- # shift 00:05:29.362 15:52:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.362 15:52:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.362 15:52:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.362 15:52:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.362 15:52:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.362 15:52:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2630493 00:05:29.362 15:52:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.362 Waiting for target to run... 00:05:29.362 15:52:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.363 15:52:57 json_config -- json_config/common.sh@25 -- # waitforlisten 2630493 /var/tmp/spdk_tgt.sock 00:05:29.363 15:52:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 2630493 ']' 00:05:29.363 15:52:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.363 15:52:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.363 15:52:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.363 15:52:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.363 15:52:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.363 [2024-12-15 15:52:57.894918] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:29.363 [2024-12-15 15:52:57.894980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630493 ] 00:05:29.931 [2024-12-15 15:52:58.335708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.931 [2024-12-15 15:52:58.366955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.222 [2024-12-15 15:53:01.414265] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x107c8f0/0x1017000) succeed. 00:05:33.222 [2024-12-15 15:53:01.425737] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x107fb40/0x10ac080) succeed. 
00:05:33.222 [2024-12-15 15:53:01.474633] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:33.790 15:53:02 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.790 15:53:02 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:33.790 15:53:02 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.790 00:05:33.790 15:53:02 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:33.790 15:53:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:33.790 INFO: Checking if target configuration is the same... 00:05:33.790 15:53:02 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.790 15:53:02 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:33.790 15:53:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.790 + '[' 2 -ne 2 ']' 00:05:33.790 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.790 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:33.790 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:33.790 +++ basename /dev/fd/62 00:05:33.790 ++ mktemp /tmp/62.XXX 00:05:33.790 + tmp_file_1=/tmp/62.GRU 00:05:33.790 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.790 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.790 + tmp_file_2=/tmp/spdk_tgt_config.json.dkF 00:05:33.790 + ret=0 00:05:33.790 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.050 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.050 + diff -u /tmp/62.GRU /tmp/spdk_tgt_config.json.dkF 00:05:34.050 + echo 'INFO: JSON config files are the same' 00:05:34.050 INFO: JSON config files are the same 00:05:34.050 + rm /tmp/62.GRU /tmp/spdk_tgt_config.json.dkF 00:05:34.050 + exit 0 00:05:34.050 15:53:02 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:34.050 15:53:02 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:34.050 INFO: changing configuration and checking if this can be detected... 
00:05:34.050 15:53:02 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.050 15:53:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.356 15:53:02 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.356 15:53:02 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:34.356 15:53:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.356 + '[' 2 -ne 2 ']' 00:05:34.356 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:34.356 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:34.356 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:34.356 +++ basename /dev/fd/62 00:05:34.356 ++ mktemp /tmp/62.XXX 00:05:34.356 + tmp_file_1=/tmp/62.hTj 00:05:34.356 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.356 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:34.356 + tmp_file_2=/tmp/spdk_tgt_config.json.ags 00:05:34.356 + ret=0 00:05:34.356 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.629 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.629 + diff -u /tmp/62.hTj /tmp/spdk_tgt_config.json.ags 00:05:34.629 + ret=1 00:05:34.629 + echo '=== Start of file: /tmp/62.hTj ===' 00:05:34.629 + cat /tmp/62.hTj 00:05:34.629 + echo '=== End of file: /tmp/62.hTj ===' 00:05:34.629 + echo '' 00:05:34.629 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ags ===' 00:05:34.629 + cat /tmp/spdk_tgt_config.json.ags 00:05:34.629 + echo '=== End of file: /tmp/spdk_tgt_config.json.ags ===' 00:05:34.629 + echo '' 00:05:34.629 + rm /tmp/62.hTj /tmp/spdk_tgt_config.json.ags 00:05:34.629 + exit 1 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:34.629 INFO: configuration change detected. 
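The change detection exercised in both passes above is a normalize-then-diff harness: dump the live configuration with save_config, run both that dump and the reference spdk_tgt_config.json through config_filter.py -method sort so key ordering cannot cause false mismatches, then compare with diff -u. Sketched below, with the temp-file plumbing simplified relative to json_diff.sh:

    live=$(mktemp) ref=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$live"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$ref"
    if diff -u "$live" "$ref"; then
        echo "INFO: JSON config files are the same"
    else
        echo "INFO: configuration change detected."
    fi
    rm -f "$live" "$ref"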
00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@324 -- # [[ -n 2630493 ]] 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.629 15:53:03 json_config -- json_config/json_config.sh@330 -- # killprocess 2630493 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@950 -- # '[' -z 2630493 ']' 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@954 -- # kill -0 2630493 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@955 -- # uname 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2630493 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2630493' 00:05:34.629 killing process with pid 2630493 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@969 -- # kill 2630493 00:05:34.629 15:53:03 json_config -- common/autotest_common.sh@974 -- # wait 2630493 00:05:37.164 15:53:05 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.164 15:53:05 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:37.164 15:53:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.164 15:53:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.164 15:53:05 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:37.164 15:53:05 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:37.164 INFO: Success 00:05:37.164 15:53:05 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:05:37.164 15:53:05 json_config -- nvmf/common.sh@512 -- # nvmfcleanup 00:05:37.164 15:53:05 json_config -- nvmf/common.sh@121 -- # sync 00:05:37.164 15:53:05 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:37.164 15:53:05 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:37.164 15:53:05 json_config -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:05:37.164 15:53:05 json_config -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:05:37.164 15:53:05 json_config -- nvmf/common.sh@519 -- # [[ '' == \t\c\p ]] 00:05:37.164 00:05:37.164 real 0m24.785s 00:05:37.164 user 0m27.528s 00:05:37.164 sys 0m7.651s 00:05:37.164 15:53:05 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.164 15:53:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.164 ************************************ 00:05:37.164 END TEST json_config 00:05:37.164 ************************************ 00:05:37.424 15:53:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:37.424 15:53:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.424 15:53:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.424 15:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:37.424 ************************************ 00:05:37.424 START TEST json_config_extra_key 00:05:37.424 ************************************ 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.424 15:53:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.424 --rc genhtml_branch_coverage=1 00:05:37.424 --rc genhtml_function_coverage=1 00:05:37.424 --rc genhtml_legend=1 00:05:37.424 --rc geninfo_all_blocks=1 00:05:37.424 --rc geninfo_unexecuted_blocks=1 00:05:37.424 00:05:37.424 ' 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.424 --rc genhtml_branch_coverage=1 00:05:37.424 --rc genhtml_function_coverage=1 00:05:37.424 --rc genhtml_legend=1 00:05:37.424 --rc geninfo_all_blocks=1 00:05:37.424 --rc geninfo_unexecuted_blocks=1 00:05:37.424 00:05:37.424 ' 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.424 --rc genhtml_branch_coverage=1 00:05:37.424 --rc genhtml_function_coverage=1 00:05:37.424 --rc genhtml_legend=1 00:05:37.424 --rc geninfo_all_blocks=1 00:05:37.424 --rc geninfo_unexecuted_blocks=1 00:05:37.424 00:05:37.424 ' 00:05:37.424 15:53:05 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.424 --rc genhtml_branch_coverage=1 00:05:37.424 --rc genhtml_function_coverage=1 00:05:37.424 --rc genhtml_legend=1 00:05:37.424 --rc geninfo_all_blocks=1 00:05:37.424 --rc geninfo_unexecuted_blocks=1 00:05:37.424 00:05:37.424 ' 00:05:37.424 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.424 
15:53:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:37.424 15:53:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:37.425 15:53:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.425 15:53:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.425 15:53:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.425 15:53:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.425 15:53:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.425 15:53:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.425 15:53:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.425 15:53:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:37.425 15:53:05 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:37.425 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:37.425 15:53:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:37.425 INFO: launching applications... 
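The "[: : integer expression expected" complaint a few lines up is benign: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' when a test flag is unset, and an empty string is not an integer; defaulting the variable (e.g. "${FLAG:-0}") would silence it. The launch that follows matches the json_config/common.sh pattern: start spdk_tgt with an explicit --json config, record the pid, and poll the RPC socket until it answers. A minimal sketch; the probe method is an assumption, since the traced waitforlisten retries up to 100 times but its exact probe is not shown:

    # Launch the target against extra_key.json and wait for its RPC socket.
    "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$rootdir/test/json_config/extra_key.json" &
    app_pid=$!
    echo 'Waiting for target to run...'
    for _ in $(seq 1 100); do   # max_retries=100, as in the trace
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods \
            &>/dev/null && break
        sleep 0.1
    done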
00:05:37.425 15:53:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2632049 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.425 Waiting for target to run... 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2632049 /var/tmp/spdk_tgt.sock 00:05:37.425 15:53:05 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2632049 ']' 00:05:37.425 15:53:05 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.425 15:53:05 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.425 15:53:05 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.425 15:53:05 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.425 15:53:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:37.425 15:53:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.684 [2024-12-15 15:53:06.015263] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:37.684 [2024-12-15 15:53:06.015316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632049 ] 00:05:37.944 [2024-12-15 15:53:06.304174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.944 [2024-12-15 15:53:06.325629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.512 15:53:06 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.512 15:53:06 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:38.512 00:05:38.512 15:53:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:38.512 INFO: shutting down applications... 
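Shutdown, traced below, is a SIGINT followed by a bounded liveness poll: kill -0 is retried for up to 30 half-second intervals before the target is declared down. Equivalent shell, condensed from the trace:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # target has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'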
00:05:38.512 15:53:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2632049 ]] 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2632049 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2632049 00:05:38.512 15:53:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.771 15:53:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.771 15:53:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.771 15:53:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2632049 00:05:38.771 15:53:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:38.771 15:53:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:38.771 15:53:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:38.771 15:53:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:38.771 SPDK target shutdown done 00:05:38.772 15:53:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:38.772 Success 00:05:38.772 00:05:38.772 real 0m1.544s 00:05:38.772 user 0m1.281s 00:05:38.772 sys 0m0.432s 00:05:38.772 15:53:07 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.772 15:53:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.772 ************************************ 00:05:38.772 END TEST json_config_extra_key 00:05:38.772 ************************************ 00:05:39.031 15:53:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.031 15:53:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.031 15:53:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.031 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:05:39.031 ************************************ 00:05:39.031 START TEST alias_rpc 00:05:39.031 ************************************ 00:05:39.031 15:53:07 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.031 * Looking for test storage... 
00:05:39.031 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:39.031 15:53:07 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.031 15:53:07 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.031 15:53:07 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.031 15:53:07 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.031 15:53:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:39.031 15:53:07 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.031 15:53:07 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.031 --rc genhtml_branch_coverage=1 00:05:39.031 --rc genhtml_function_coverage=1 00:05:39.031 --rc genhtml_legend=1 00:05:39.031 --rc geninfo_all_blocks=1 00:05:39.031 --rc geninfo_unexecuted_blocks=1 00:05:39.031 00:05:39.031 ' 00:05:39.031 15:53:07 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.031 --rc genhtml_branch_coverage=1 00:05:39.031 --rc genhtml_function_coverage=1 00:05:39.031 --rc genhtml_legend=1 00:05:39.031 --rc geninfo_all_blocks=1 00:05:39.031 --rc geninfo_unexecuted_blocks=1 00:05:39.031 00:05:39.031 ' 00:05:39.032 15:53:07 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.032 --rc genhtml_branch_coverage=1 00:05:39.032 --rc genhtml_function_coverage=1 00:05:39.032 --rc genhtml_legend=1 00:05:39.032 --rc geninfo_all_blocks=1 00:05:39.032 --rc geninfo_unexecuted_blocks=1 00:05:39.032 00:05:39.032 ' 00:05:39.032 15:53:07 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.032 --rc genhtml_branch_coverage=1 00:05:39.032 --rc genhtml_function_coverage=1 00:05:39.032 --rc genhtml_legend=1 00:05:39.032 --rc geninfo_all_blocks=1 00:05:39.032 --rc geninfo_unexecuted_blocks=1 00:05:39.032 00:05:39.032 ' 00:05:39.032 15:53:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.032 15:53:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2632368 00:05:39.032 15:53:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2632368 00:05:39.032 15:53:07 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2632368 ']' 00:05:39.032 15:53:07 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.032 15:53:07 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.032 15:53:07 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.032 15:53:07 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.032 15:53:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.032 15:53:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.291 [2024-12-15 15:53:07.647556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:39.291 [2024-12-15 15:53:07.647604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632368 ] 00:05:39.291 [2024-12-15 15:53:07.717113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.291 [2024-12-15 15:53:07.756241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.551 15:53:07 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.551 15:53:07 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:39.551 15:53:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:39.810 15:53:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2632368 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2632368 ']' 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2632368 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2632368 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2632368' 00:05:39.810 killing process with pid 2632368 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@969 -- # kill 2632368 00:05:39.810 15:53:08 alias_rpc -- common/autotest_common.sh@974 -- # wait 2632368 00:05:40.070 00:05:40.070 real 0m1.123s 00:05:40.070 user 0m1.084s 00:05:40.070 sys 0m0.472s 00:05:40.070 15:53:08 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.070 15:53:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.070 ************************************ 00:05:40.070 END TEST alias_rpc 00:05:40.070 ************************************ 00:05:40.070 15:53:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:40.070 15:53:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:40.070 15:53:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.070 15:53:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.070 15:53:08 -- common/autotest_common.sh@10 -- # set +x 00:05:40.070 ************************************ 00:05:40.070 START TEST spdkcli_tcp 00:05:40.070 ************************************ 00:05:40.070 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:40.330 * Looking for test storage... 
00:05:40.330 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.330 15:53:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.330 --rc genhtml_branch_coverage=1 00:05:40.330 --rc genhtml_function_coverage=1 00:05:40.330 --rc genhtml_legend=1 00:05:40.330 --rc geninfo_all_blocks=1 00:05:40.330 --rc geninfo_unexecuted_blocks=1 00:05:40.330 00:05:40.330 ' 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.330 --rc genhtml_branch_coverage=1 00:05:40.330 --rc genhtml_function_coverage=1 00:05:40.330 --rc genhtml_legend=1 00:05:40.330 --rc geninfo_all_blocks=1 00:05:40.330 --rc geninfo_unexecuted_blocks=1 
00:05:40.330 00:05:40.330 ' 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.330 --rc genhtml_branch_coverage=1 00:05:40.330 --rc genhtml_function_coverage=1 00:05:40.330 --rc genhtml_legend=1 00:05:40.330 --rc geninfo_all_blocks=1 00:05:40.330 --rc geninfo_unexecuted_blocks=1 00:05:40.330 00:05:40.330 ' 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.330 --rc genhtml_branch_coverage=1 00:05:40.330 --rc genhtml_function_coverage=1 00:05:40.330 --rc genhtml_legend=1 00:05:40.330 --rc geninfo_all_blocks=1 00:05:40.330 --rc geninfo_unexecuted_blocks=1 00:05:40.330 00:05:40.330 ' 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2632689 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2632689 00:05:40.330 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2632689 ']' 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.330 15:53:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.330 [2024-12-15 15:53:08.870397] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:40.330 [2024-12-15 15:53:08.870448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632689 ] 00:05:40.590 [2024-12-15 15:53:08.939920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.590 [2024-12-15 15:53:08.980121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.590 [2024-12-15 15:53:08.980125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.849 15:53:09 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.849 15:53:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:40.849 15:53:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2632707 00:05:40.849 15:53:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.849 15:53:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:40.849 [ 00:05:40.849 "bdev_malloc_delete", 00:05:40.849 "bdev_malloc_create", 00:05:40.849 "bdev_null_resize", 00:05:40.849 "bdev_null_delete", 00:05:40.849 "bdev_null_create", 00:05:40.849 "bdev_nvme_cuse_unregister", 00:05:40.849 "bdev_nvme_cuse_register", 00:05:40.849 "bdev_opal_new_user", 00:05:40.849 "bdev_opal_set_lock_state", 00:05:40.849 "bdev_opal_delete", 00:05:40.849 "bdev_opal_get_info", 00:05:40.849 "bdev_opal_create", 00:05:40.849 "bdev_nvme_opal_revert", 00:05:40.849 "bdev_nvme_opal_init", 00:05:40.849 "bdev_nvme_send_cmd", 00:05:40.849 "bdev_nvme_set_keys", 00:05:40.849 "bdev_nvme_get_path_iostat", 00:05:40.849 "bdev_nvme_get_mdns_discovery_info", 00:05:40.849 "bdev_nvme_stop_mdns_discovery", 00:05:40.849 "bdev_nvme_start_mdns_discovery", 00:05:40.849 "bdev_nvme_set_multipath_policy", 00:05:40.849 "bdev_nvme_set_preferred_path", 00:05:40.849 "bdev_nvme_get_io_paths", 00:05:40.849 "bdev_nvme_remove_error_injection", 00:05:40.849 "bdev_nvme_add_error_injection", 00:05:40.849 "bdev_nvme_get_discovery_info", 00:05:40.849 "bdev_nvme_stop_discovery", 00:05:40.849 "bdev_nvme_start_discovery", 00:05:40.849 "bdev_nvme_get_controller_health_info", 00:05:40.849 "bdev_nvme_disable_controller", 00:05:40.849 "bdev_nvme_enable_controller", 00:05:40.849 "bdev_nvme_reset_controller", 00:05:40.849 "bdev_nvme_get_transport_statistics", 00:05:40.849 "bdev_nvme_apply_firmware", 00:05:40.849 "bdev_nvme_detach_controller", 00:05:40.849 "bdev_nvme_get_controllers", 00:05:40.849 "bdev_nvme_attach_controller", 00:05:40.849 "bdev_nvme_set_hotplug", 00:05:40.849 "bdev_nvme_set_options", 00:05:40.849 "bdev_passthru_delete", 00:05:40.849 "bdev_passthru_create", 00:05:40.849 "bdev_lvol_set_parent_bdev", 00:05:40.849 "bdev_lvol_set_parent", 00:05:40.849 "bdev_lvol_check_shallow_copy", 00:05:40.849 "bdev_lvol_start_shallow_copy", 00:05:40.849 "bdev_lvol_grow_lvstore", 00:05:40.849 "bdev_lvol_get_lvols", 00:05:40.849 "bdev_lvol_get_lvstores", 00:05:40.849 "bdev_lvol_delete", 00:05:40.849 "bdev_lvol_set_read_only", 00:05:40.849 "bdev_lvol_resize", 00:05:40.849 "bdev_lvol_decouple_parent", 00:05:40.849 "bdev_lvol_inflate", 00:05:40.849 "bdev_lvol_rename", 00:05:40.849 "bdev_lvol_clone_bdev", 00:05:40.849 "bdev_lvol_clone", 00:05:40.849 "bdev_lvol_snapshot", 00:05:40.849 "bdev_lvol_create", 00:05:40.849 "bdev_lvol_delete_lvstore", 00:05:40.849 "bdev_lvol_rename_lvstore", 
00:05:40.849 "bdev_lvol_create_lvstore", 00:05:40.849 "bdev_raid_set_options", 00:05:40.849 "bdev_raid_remove_base_bdev", 00:05:40.849 "bdev_raid_add_base_bdev", 00:05:40.849 "bdev_raid_delete", 00:05:40.849 "bdev_raid_create", 00:05:40.849 "bdev_raid_get_bdevs", 00:05:40.849 "bdev_error_inject_error", 00:05:40.849 "bdev_error_delete", 00:05:40.849 "bdev_error_create", 00:05:40.849 "bdev_split_delete", 00:05:40.849 "bdev_split_create", 00:05:40.849 "bdev_delay_delete", 00:05:40.849 "bdev_delay_create", 00:05:40.849 "bdev_delay_update_latency", 00:05:40.849 "bdev_zone_block_delete", 00:05:40.849 "bdev_zone_block_create", 00:05:40.849 "blobfs_create", 00:05:40.849 "blobfs_detect", 00:05:40.849 "blobfs_set_cache_size", 00:05:40.849 "bdev_aio_delete", 00:05:40.849 "bdev_aio_rescan", 00:05:40.850 "bdev_aio_create", 00:05:40.850 "bdev_ftl_set_property", 00:05:40.850 "bdev_ftl_get_properties", 00:05:40.850 "bdev_ftl_get_stats", 00:05:40.850 "bdev_ftl_unmap", 00:05:40.850 "bdev_ftl_unload", 00:05:40.850 "bdev_ftl_delete", 00:05:40.850 "bdev_ftl_load", 00:05:40.850 "bdev_ftl_create", 00:05:40.850 "bdev_virtio_attach_controller", 00:05:40.850 "bdev_virtio_scsi_get_devices", 00:05:40.850 "bdev_virtio_detach_controller", 00:05:40.850 "bdev_virtio_blk_set_hotplug", 00:05:40.850 "bdev_iscsi_delete", 00:05:40.850 "bdev_iscsi_create", 00:05:40.850 "bdev_iscsi_set_options", 00:05:40.850 "accel_error_inject_error", 00:05:40.850 "ioat_scan_accel_module", 00:05:40.850 "dsa_scan_accel_module", 00:05:40.850 "iaa_scan_accel_module", 00:05:40.850 "keyring_file_remove_key", 00:05:40.850 "keyring_file_add_key", 00:05:40.850 "keyring_linux_set_options", 00:05:40.850 "fsdev_aio_delete", 00:05:40.850 "fsdev_aio_create", 00:05:40.850 "iscsi_get_histogram", 00:05:40.850 "iscsi_enable_histogram", 00:05:40.850 "iscsi_set_options", 00:05:40.850 "iscsi_get_auth_groups", 00:05:40.850 "iscsi_auth_group_remove_secret", 00:05:40.850 "iscsi_auth_group_add_secret", 00:05:40.850 "iscsi_delete_auth_group", 00:05:40.850 "iscsi_create_auth_group", 00:05:40.850 "iscsi_set_discovery_auth", 00:05:40.850 "iscsi_get_options", 00:05:40.850 "iscsi_target_node_request_logout", 00:05:40.850 "iscsi_target_node_set_redirect", 00:05:40.850 "iscsi_target_node_set_auth", 00:05:40.850 "iscsi_target_node_add_lun", 00:05:40.850 "iscsi_get_stats", 00:05:40.850 "iscsi_get_connections", 00:05:40.850 "iscsi_portal_group_set_auth", 00:05:40.850 "iscsi_start_portal_group", 00:05:40.850 "iscsi_delete_portal_group", 00:05:40.850 "iscsi_create_portal_group", 00:05:40.850 "iscsi_get_portal_groups", 00:05:40.850 "iscsi_delete_target_node", 00:05:40.850 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.850 "iscsi_target_node_add_pg_ig_maps", 00:05:40.850 "iscsi_create_target_node", 00:05:40.850 "iscsi_get_target_nodes", 00:05:40.850 "iscsi_delete_initiator_group", 00:05:40.850 "iscsi_initiator_group_remove_initiators", 00:05:40.850 "iscsi_initiator_group_add_initiators", 00:05:40.850 "iscsi_create_initiator_group", 00:05:40.850 "iscsi_get_initiator_groups", 00:05:40.850 "nvmf_set_crdt", 00:05:40.850 "nvmf_set_config", 00:05:40.850 "nvmf_set_max_subsystems", 00:05:40.850 "nvmf_stop_mdns_prr", 00:05:40.850 "nvmf_publish_mdns_prr", 00:05:40.850 "nvmf_subsystem_get_listeners", 00:05:40.850 "nvmf_subsystem_get_qpairs", 00:05:40.850 "nvmf_subsystem_get_controllers", 00:05:40.850 "nvmf_get_stats", 00:05:40.850 "nvmf_get_transports", 00:05:40.850 "nvmf_create_transport", 00:05:40.850 "nvmf_get_targets", 00:05:40.850 "nvmf_delete_target", 00:05:40.850 "nvmf_create_target", 
00:05:40.850 "nvmf_subsystem_allow_any_host", 00:05:40.850 "nvmf_subsystem_set_keys", 00:05:40.850 "nvmf_subsystem_remove_host", 00:05:40.850 "nvmf_subsystem_add_host", 00:05:40.850 "nvmf_ns_remove_host", 00:05:40.850 "nvmf_ns_add_host", 00:05:40.850 "nvmf_subsystem_remove_ns", 00:05:40.850 "nvmf_subsystem_set_ns_ana_group", 00:05:40.850 "nvmf_subsystem_add_ns", 00:05:40.850 "nvmf_subsystem_listener_set_ana_state", 00:05:40.850 "nvmf_discovery_get_referrals", 00:05:40.850 "nvmf_discovery_remove_referral", 00:05:40.850 "nvmf_discovery_add_referral", 00:05:40.850 "nvmf_subsystem_remove_listener", 00:05:40.850 "nvmf_subsystem_add_listener", 00:05:40.850 "nvmf_delete_subsystem", 00:05:40.850 "nvmf_create_subsystem", 00:05:40.850 "nvmf_get_subsystems", 00:05:40.850 "env_dpdk_get_mem_stats", 00:05:40.850 "nbd_get_disks", 00:05:40.850 "nbd_stop_disk", 00:05:40.850 "nbd_start_disk", 00:05:40.850 "ublk_recover_disk", 00:05:40.850 "ublk_get_disks", 00:05:40.850 "ublk_stop_disk", 00:05:40.850 "ublk_start_disk", 00:05:40.850 "ublk_destroy_target", 00:05:40.850 "ublk_create_target", 00:05:40.850 "virtio_blk_create_transport", 00:05:40.850 "virtio_blk_get_transports", 00:05:40.850 "vhost_controller_set_coalescing", 00:05:40.850 "vhost_get_controllers", 00:05:40.850 "vhost_delete_controller", 00:05:40.850 "vhost_create_blk_controller", 00:05:40.850 "vhost_scsi_controller_remove_target", 00:05:40.850 "vhost_scsi_controller_add_target", 00:05:40.850 "vhost_start_scsi_controller", 00:05:40.850 "vhost_create_scsi_controller", 00:05:40.850 "thread_set_cpumask", 00:05:40.850 "scheduler_set_options", 00:05:40.850 "framework_get_governor", 00:05:40.850 "framework_get_scheduler", 00:05:40.850 "framework_set_scheduler", 00:05:40.850 "framework_get_reactors", 00:05:40.850 "thread_get_io_channels", 00:05:40.850 "thread_get_pollers", 00:05:40.850 "thread_get_stats", 00:05:40.850 "framework_monitor_context_switch", 00:05:40.850 "spdk_kill_instance", 00:05:40.850 "log_enable_timestamps", 00:05:40.850 "log_get_flags", 00:05:40.850 "log_clear_flag", 00:05:40.850 "log_set_flag", 00:05:40.850 "log_get_level", 00:05:40.850 "log_set_level", 00:05:40.850 "log_get_print_level", 00:05:40.850 "log_set_print_level", 00:05:40.850 "framework_enable_cpumask_locks", 00:05:40.850 "framework_disable_cpumask_locks", 00:05:40.850 "framework_wait_init", 00:05:40.850 "framework_start_init", 00:05:40.850 "scsi_get_devices", 00:05:40.850 "bdev_get_histogram", 00:05:40.850 "bdev_enable_histogram", 00:05:40.850 "bdev_set_qos_limit", 00:05:40.850 "bdev_set_qd_sampling_period", 00:05:40.850 "bdev_get_bdevs", 00:05:40.850 "bdev_reset_iostat", 00:05:40.850 "bdev_get_iostat", 00:05:40.850 "bdev_examine", 00:05:40.850 "bdev_wait_for_examine", 00:05:40.850 "bdev_set_options", 00:05:40.850 "accel_get_stats", 00:05:40.850 "accel_set_options", 00:05:40.850 "accel_set_driver", 00:05:40.850 "accel_crypto_key_destroy", 00:05:40.850 "accel_crypto_keys_get", 00:05:40.850 "accel_crypto_key_create", 00:05:40.850 "accel_assign_opc", 00:05:40.850 "accel_get_module_info", 00:05:40.850 "accel_get_opc_assignments", 00:05:40.850 "vmd_rescan", 00:05:40.850 "vmd_remove_device", 00:05:40.850 "vmd_enable", 00:05:40.850 "sock_get_default_impl", 00:05:40.850 "sock_set_default_impl", 00:05:40.850 "sock_impl_set_options", 00:05:40.850 "sock_impl_get_options", 00:05:40.850 "iobuf_get_stats", 00:05:40.850 "iobuf_set_options", 00:05:40.850 "keyring_get_keys", 00:05:40.850 "framework_get_pci_devices", 00:05:40.850 "framework_get_config", 00:05:40.850 "framework_get_subsystems", 
00:05:40.850 "fsdev_set_opts", 00:05:40.850 "fsdev_get_opts", 00:05:40.850 "trace_get_info", 00:05:40.850 "trace_get_tpoint_group_mask", 00:05:40.850 "trace_disable_tpoint_group", 00:05:40.850 "trace_enable_tpoint_group", 00:05:40.850 "trace_clear_tpoint_mask", 00:05:40.850 "trace_set_tpoint_mask", 00:05:40.850 "notify_get_notifications", 00:05:40.850 "notify_get_types", 00:05:40.850 "spdk_get_version", 00:05:40.850 "rpc_get_methods" 00:05:40.850 ] 00:05:40.850 15:53:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.850 15:53:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.850 15:53:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.110 15:53:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:41.110 15:53:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2632689 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2632689 ']' 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2632689 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2632689 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2632689' 00:05:41.110 killing process with pid 2632689 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2632689 00:05:41.110 15:53:09 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2632689 00:05:41.369 00:05:41.369 real 0m1.185s 00:05:41.369 user 0m1.930s 00:05:41.369 sys 0m0.515s 00:05:41.369 15:53:09 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.369 15:53:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.369 ************************************ 00:05:41.369 END TEST spdkcli_tcp 00:05:41.369 ************************************ 00:05:41.369 15:53:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.369 15:53:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.369 15:53:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.369 15:53:09 -- common/autotest_common.sh@10 -- # set +x 00:05:41.369 ************************************ 00:05:41.369 START TEST dpdk_mem_utility 00:05:41.369 ************************************ 00:05:41.369 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.629 * Looking for test storage... 
00:05:41.629 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:41.629 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.629 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.629 15:53:09 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.629 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.629 15:53:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.630 15:53:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.630 --rc genhtml_branch_coverage=1 00:05:41.630 --rc genhtml_function_coverage=1 00:05:41.630 --rc genhtml_legend=1 00:05:41.630 --rc geninfo_all_blocks=1 00:05:41.630 --rc geninfo_unexecuted_blocks=1 00:05:41.630 00:05:41.630 ' 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.630 --rc 
genhtml_branch_coverage=1 00:05:41.630 --rc genhtml_function_coverage=1 00:05:41.630 --rc genhtml_legend=1 00:05:41.630 --rc geninfo_all_blocks=1 00:05:41.630 --rc geninfo_unexecuted_blocks=1 00:05:41.630 00:05:41.630 ' 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.630 --rc genhtml_branch_coverage=1 00:05:41.630 --rc genhtml_function_coverage=1 00:05:41.630 --rc genhtml_legend=1 00:05:41.630 --rc geninfo_all_blocks=1 00:05:41.630 --rc geninfo_unexecuted_blocks=1 00:05:41.630 00:05:41.630 ' 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.630 --rc genhtml_branch_coverage=1 00:05:41.630 --rc genhtml_function_coverage=1 00:05:41.630 --rc genhtml_legend=1 00:05:41.630 --rc geninfo_all_blocks=1 00:05:41.630 --rc geninfo_unexecuted_blocks=1 00:05:41.630 00:05:41.630 ' 00:05:41.630 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:41.630 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2632953 00:05:41.630 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2632953 00:05:41.630 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2632953 ']' 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.630 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.630 [2024-12-15 15:53:10.120644] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:41.630 [2024-12-15 15:53:10.120708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632953 ] 00:05:41.630 [2024-12-15 15:53:10.192048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.889 [2024-12-15 15:53:10.230873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.889 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.889 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:41.889 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:41.889 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:41.889 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.889 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.889 { 00:05:41.889 "filename": "/tmp/spdk_mem_dump.txt" 00:05:41.889 } 00:05:41.889 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.889 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:42.150 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:42.150 1 heaps totaling size 860.000000 MiB 00:05:42.150 size: 860.000000 MiB heap id: 0 00:05:42.150 end heaps---------- 00:05:42.150 9 mempools totaling size 642.649841 MiB 00:05:42.150 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:42.150 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:42.150 size: 92.545471 MiB name: bdev_io_2632953 00:05:42.150 size: 51.011292 MiB name: evtpool_2632953 00:05:42.150 size: 50.003479 MiB name: msgpool_2632953 00:05:42.150 size: 36.509338 MiB name: fsdev_io_2632953 00:05:42.150 size: 21.763794 MiB name: PDU_Pool 00:05:42.150 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:42.150 size: 0.026123 MiB name: Session_Pool 00:05:42.150 end mempools------- 00:05:42.150 6 memzones totaling size 4.142822 MiB 00:05:42.150 size: 1.000366 MiB name: RG_ring_0_2632953 00:05:42.150 size: 1.000366 MiB name: RG_ring_1_2632953 00:05:42.150 size: 1.000366 MiB name: RG_ring_4_2632953 00:05:42.150 size: 1.000366 MiB name: RG_ring_5_2632953 00:05:42.150 size: 0.125366 MiB name: RG_ring_2_2632953 00:05:42.150 size: 0.015991 MiB name: RG_ring_3_2632953 00:05:42.150 end memzones------- 00:05:42.150 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:42.150 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:42.150 list of free elements. 
size: 13.984680 MiB 00:05:42.150 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:42.150 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:42.150 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:42.150 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:42.150 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:42.150 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:42.150 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:42.150 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:42.150 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:42.150 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:42.150 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:42.150 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:42.150 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:42.150 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:42.150 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:42.150 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:42.150 list of standard malloc elements. size: 199.218628 MiB 00:05:42.150 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:42.150 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:42.150 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:42.150 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:42.150 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:42.150 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:42.150 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:42.150 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:42.150 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:42.150 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:42.150 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:42.150 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:42.150 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:42.150 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:42.150 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:42.150 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:42.150 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:42.150 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:42.150 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:42.150 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:42.150 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:42.150 list of memzone associated elements. size: 646.796692 MiB 00:05:42.150 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:42.150 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:42.150 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:42.150 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:42.150 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:42.150 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2632953_0 00:05:42.150 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:42.150 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2632953_0 00:05:42.150 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:42.150 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2632953_0 00:05:42.150 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:42.150 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2632953_0 00:05:42.150 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:42.150 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:42.150 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:42.150 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:42.150 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:42.150 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2632953 00:05:42.150 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:42.150 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2632953 00:05:42.150 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:42.150 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2632953 00:05:42.150 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:42.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:42.150 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:42.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:42.150 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:42.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:42.150 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:42.150 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:42.150 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:42.150 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2632953 00:05:42.150 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:42.151 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_2632953 00:05:42.151 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:42.151 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2632953 00:05:42.151 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:42.151 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2632953 00:05:42.151 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:42.151 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2632953 00:05:42.151 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:42.151 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2632953 00:05:42.151 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:42.151 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:42.151 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:42.151 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:42.151 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:42.151 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:42.151 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:42.151 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2632953 00:05:42.151 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:42.151 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:42.151 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:42.151 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:42.151 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:42.151 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2632953 00:05:42.151 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:42.151 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:42.151 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:42.151 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2632953 00:05:42.151 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:42.151 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2632953 00:05:42.151 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:42.151 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2632953 00:05:42.151 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:42.151 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:42.151 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:42.151 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2632953 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2632953 ']' 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2632953 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2632953 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2632953' 
00:05:42.151 killing process with pid 2632953 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2632953 00:05:42.151 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2632953 00:05:42.410 00:05:42.410 real 0m1.041s 00:05:42.410 user 0m0.955s 00:05:42.410 sys 0m0.450s 00:05:42.410 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.410 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.410 ************************************ 00:05:42.410 END TEST dpdk_mem_utility 00:05:42.410 ************************************ 00:05:42.410 15:53:10 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:42.410 15:53:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.410 15:53:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.411 15:53:10 -- common/autotest_common.sh@10 -- # set +x 00:05:42.670 ************************************ 00:05:42.670 START TEST event 00:05:42.670 ************************************ 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:42.670 * Looking for test storage... 00:05:42.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.670 15:53:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.670 15:53:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.670 15:53:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.670 15:53:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.670 15:53:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.670 15:53:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.670 15:53:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.670 15:53:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.670 15:53:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.670 15:53:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.670 15:53:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.670 15:53:11 event -- scripts/common.sh@344 -- # case "$op" in 00:05:42.670 15:53:11 event -- scripts/common.sh@345 -- # : 1 00:05:42.670 15:53:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.670 15:53:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.670 15:53:11 event -- scripts/common.sh@365 -- # decimal 1 00:05:42.670 15:53:11 event -- scripts/common.sh@353 -- # local d=1 00:05:42.670 15:53:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.670 15:53:11 event -- scripts/common.sh@355 -- # echo 1 00:05:42.670 15:53:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.670 15:53:11 event -- scripts/common.sh@366 -- # decimal 2 00:05:42.670 15:53:11 event -- scripts/common.sh@353 -- # local d=2 00:05:42.670 15:53:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.670 15:53:11 event -- scripts/common.sh@355 -- # echo 2 00:05:42.670 15:53:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.670 15:53:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.670 15:53:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.670 15:53:11 event -- scripts/common.sh@368 -- # return 0 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.670 --rc genhtml_branch_coverage=1 00:05:42.670 --rc genhtml_function_coverage=1 00:05:42.670 --rc genhtml_legend=1 00:05:42.670 --rc geninfo_all_blocks=1 00:05:42.670 --rc geninfo_unexecuted_blocks=1 00:05:42.670 00:05:42.670 ' 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.670 --rc genhtml_branch_coverage=1 00:05:42.670 --rc genhtml_function_coverage=1 00:05:42.670 --rc genhtml_legend=1 00:05:42.670 --rc geninfo_all_blocks=1 00:05:42.670 --rc geninfo_unexecuted_blocks=1 00:05:42.670 00:05:42.670 ' 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.670 --rc genhtml_branch_coverage=1 00:05:42.670 --rc genhtml_function_coverage=1 00:05:42.670 --rc genhtml_legend=1 00:05:42.670 --rc geninfo_all_blocks=1 00:05:42.670 --rc geninfo_unexecuted_blocks=1 00:05:42.670 00:05:42.670 ' 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.670 --rc genhtml_branch_coverage=1 00:05:42.670 --rc genhtml_function_coverage=1 00:05:42.670 --rc genhtml_legend=1 00:05:42.670 --rc geninfo_all_blocks=1 00:05:42.670 --rc geninfo_unexecuted_blocks=1 00:05:42.670 00:05:42.670 ' 00:05:42.670 15:53:11 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:42.670 15:53:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.670 15:53:11 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:42.670 15:53:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.670 15:53:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.670 ************************************ 00:05:42.670 START TEST event_perf 00:05:42.670 ************************************ 00:05:42.671 15:53:11 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
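Every case in this log runs through the same run_test wrapper, which is what prints the START TEST / END TEST banners and leaves the real/user/sys timing summary after each block. In spirit it behaves like the sketch below; this is the idiom, not the verbatim autotest_common.sh helper, which also does xtrace bookkeeping around the call.

    run_test() {
        local name=$1; shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        time "$@"                    # run and time the test command
        local rc=$?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return $rc
    }
    run_test event_perf ./event_perf -m 0xF -t 1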
00:05:42.930 Running I/O for 1 seconds...[2024-12-15 15:53:11.241879] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:42.930 [2024-12-15 15:53:11.241964] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633114 ] 00:05:42.930 [2024-12-15 15:53:11.314650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.930 [2024-12-15 15:53:11.355853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.930 [2024-12-15 15:53:11.355947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.930 [2024-12-15 15:53:11.356007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.930 [2024-12-15 15:53:11.356009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.868 Running I/O for 1 seconds... 00:05:43.868 lcore 0: 214524 00:05:43.868 lcore 1: 214525 00:05:43.868 lcore 2: 214526 00:05:43.868 lcore 3: 214524 00:05:43.868 done. 00:05:43.868 00:05:43.868 real 0m1.195s 00:05:43.868 user 0m4.094s 00:05:43.868 sys 0m0.097s 00:05:43.868 15:53:12 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.868 15:53:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.868 ************************************ 00:05:43.868 END TEST event_perf 00:05:43.868 ************************************ 00:05:44.127 15:53:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:44.127 15:53:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:44.127 15:53:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.127 15:53:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.127 ************************************ 00:05:44.127 START TEST event_reactor 00:05:44.127 ************************************ 00:05:44.127 15:53:12 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:44.127 [2024-12-15 15:53:12.521700] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
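event_perf above was launched with -m 0xF, which the EAL reports as -c 0xF: a hex core mask selecting cores 0-3, hence the four per-lcore event counts in its output. A small helper for reading such masks (illustrative only, not part of the harness):

    # expand a hex core mask into the lcore numbers it selects
    mask_to_cores() {                  # mask_to_cores 0xF -> 0 1 2 3
        local mask=$(( $1 )) core cores=()
        for (( core = 0; mask > 0; core++, mask >>= 1 )); do
            (( mask & 1 )) && cores+=("$core")
        done
        echo "${cores[@]}"
    }
    mask_to_cores 0xF    # the event_perf mask: cores 0 1 2 3
    mask_to_cores 0x1    # the reactor test here pins to core 0 only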
00:05:44.127 [2024-12-15 15:53:12.521769] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633401 ] 00:05:44.127 [2024-12-15 15:53:12.596453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.127 [2024-12-15 15:53:12.634323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.507 test_start 00:05:45.507 oneshot 00:05:45.507 tick 100 00:05:45.507 tick 100 00:05:45.507 tick 250 00:05:45.507 tick 100 00:05:45.507 tick 100 00:05:45.507 tick 100 00:05:45.507 tick 250 00:05:45.507 tick 500 00:05:45.507 tick 100 00:05:45.507 tick 100 00:05:45.507 tick 250 00:05:45.507 tick 100 00:05:45.507 tick 100 00:05:45.507 test_end 00:05:45.507 00:05:45.507 real 0m1.191s 00:05:45.507 user 0m1.105s 00:05:45.507 sys 0m0.082s 00:05:45.507 15:53:13 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.507 15:53:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:45.507 ************************************ 00:05:45.507 END TEST event_reactor 00:05:45.507 ************************************ 00:05:45.507 15:53:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.507 15:53:13 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:45.507 15:53:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.507 15:53:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.507 ************************************ 00:05:45.507 START TEST event_reactor_perf 00:05:45.507 ************************************ 00:05:45.507 15:53:13 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.507 [2024-12-15 15:53:13.779914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:45.507 [2024-12-15 15:53:13.779987] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633684 ] 00:05:45.507 [2024-12-15 15:53:13.849530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.507 [2024-12-15 15:53:13.887154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.445 test_start 00:05:46.445 test_end 00:05:46.445 Performance: 533033 events per second 00:05:46.445 00:05:46.445 real 0m1.180s 00:05:46.445 user 0m1.088s 00:05:46.445 sys 0m0.087s 00:05:46.445 15:53:14 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.445 15:53:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.445 ************************************ 00:05:46.445 END TEST event_reactor_perf 00:05:46.445 ************************************ 00:05:46.445 15:53:14 event -- event/event.sh@49 -- # uname -s 00:05:46.445 15:53:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.445 15:53:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:46.445 15:53:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.445 15:53:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.445 15:53:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.712 ************************************ 00:05:46.712 START TEST event_scheduler 00:05:46.712 ************************************ 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:46.712 * Looking for test storage... 
00:05:46.712 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.712 15:53:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.712 --rc genhtml_branch_coverage=1 00:05:46.712 --rc genhtml_function_coverage=1 00:05:46.712 --rc genhtml_legend=1 00:05:46.712 --rc geninfo_all_blocks=1 00:05:46.712 --rc geninfo_unexecuted_blocks=1 00:05:46.712 00:05:46.712 ' 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.712 --rc genhtml_branch_coverage=1 00:05:46.712 --rc genhtml_function_coverage=1 00:05:46.712 --rc genhtml_legend=1 00:05:46.712 --rc geninfo_all_blocks=1 00:05:46.712 --rc geninfo_unexecuted_blocks=1 00:05:46.712 00:05:46.712 ' 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.712 --rc genhtml_branch_coverage=1 00:05:46.712 --rc genhtml_function_coverage=1 00:05:46.712 --rc genhtml_legend=1 00:05:46.712 --rc geninfo_all_blocks=1 00:05:46.712 --rc geninfo_unexecuted_blocks=1 00:05:46.712 00:05:46.712 ' 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.712 --rc genhtml_branch_coverage=1 00:05:46.712 --rc genhtml_function_coverage=1 00:05:46.712 --rc genhtml_legend=1 00:05:46.712 --rc geninfo_all_blocks=1 00:05:46.712 --rc geninfo_unexecuted_blocks=1 00:05:46.712 00:05:46.712 ' 00:05:46.712 15:53:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:46.712 15:53:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:46.712 15:53:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2634010 00:05:46.712 15:53:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.712 15:53:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2634010 
00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2634010 ']' 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.712 15:53:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.712 [2024-12-15 15:53:15.253648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:46.712 [2024-12-15 15:53:15.253706] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634010 ] 00:05:46.971 [2024-12-15 15:53:15.317467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.971 [2024-12-15 15:53:15.358095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.971 [2024-12-15 15:53:15.358182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.971 [2024-12-15 15:53:15.358267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.971 [2024-12-15 15:53:15.358269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:46.971 15:53:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.971 [2024-12-15 15:53:15.426993] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:46.971 [2024-12-15 15:53:15.427011] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:46.971 [2024-12-15 15:53:15.427021] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:46.971 [2024-12-15 15:53:15.427028] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:46.971 [2024-12-15 15:53:15.427035] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.971 15:53:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.971 [2024-12-15 15:53:15.495409] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
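From here the scheduler_create_thread subtest drives the app purely over JSON-RPC, using rpc.py's --plugin hook to load scheduler_plugin: scheduler_thread_create (-n name, -m cpumask, -a active percentage), then scheduler_thread_set_active and scheduler_thread_delete against the thread id the create call returns. Run by hand, one round looks roughly like the sketch below; the thread ids 11 and 12 are the values this particular run got back, not fixed constants.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
    # a thread pinned to core 0 reporting 100% busy
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # drop an existing thread (id 11 in this run) to 50% busy
    $RPC scheduler_thread_set_active 11 50
    # create a throwaway thread, then delete it by its returned id (12 here)
    $RPC scheduler_thread_create -n deleted -a 100
    $RPC scheduler_thread_delete 12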
00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.971 15:53:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.971 15:53:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.971 ************************************ 00:05:46.971 START TEST scheduler_create_thread 00:05:46.971 ************************************ 00:05:46.971 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:46.971 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:46.971 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.971 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 2 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 3 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 4 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 5 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 6 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 7 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 8 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 9 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.231 10 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.231 15:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.168 15:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.168 15:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:48.168 15:53:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:48.168 15:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.168 15:53:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.105 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.105 15:53:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.105 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.105 15:53:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.041 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.041 15:53:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.041 15:53:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.041 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.041 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.609 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.609 00:05:50.609 real 0m3.562s 00:05:50.609 user 0m0.025s 00:05:50.609 sys 0m0.007s 00:05:50.609 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.609 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.609 ************************************ 00:05:50.609 END TEST scheduler_create_thread 00:05:50.609 ************************************ 00:05:50.609 15:53:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:50.609 15:53:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2634010 00:05:50.609 15:53:19 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2634010 ']' 00:05:50.609 15:53:19 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2634010 00:05:50.609 15:53:19 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:50.609 15:53:19 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.609 15:53:19 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2634010 00:05:50.868 15:53:19 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:50.868 15:53:19 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:50.868 15:53:19 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2634010' 00:05:50.868 killing process with pid 2634010 00:05:50.868 15:53:19 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2634010 00:05:50.868 15:53:19 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2634010 00:05:51.128 [2024-12-15 15:53:19.479697] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
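The teardown above is the killprocess idiom used for every daemon in this log: confirm the pid is still alive with kill -0, read its comm name with ps and refuse to signal sudo, then kill it and wait for the reactor to exit. Distilled below; this is consistent with the trace but is not the verbatim autotest_common.sh function.

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                       # reap it if it is our child
    }
    killprocess "$scheduler_pid"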
00:05:51.387 00:05:51.387 real 0m4.691s 00:05:51.387 user 0m8.464s 00:05:51.387 sys 0m0.430s 00:05:51.387 15:53:19 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.387 15:53:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.387 ************************************ 00:05:51.387 END TEST event_scheduler 00:05:51.387 ************************************ 00:05:51.387 15:53:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:51.387 15:53:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:51.387 15:53:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.387 15:53:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.387 15:53:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.387 ************************************ 00:05:51.387 START TEST app_repeat 00:05:51.387 ************************************ 00:05:51.387 15:53:19 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2634858 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2634858' 00:05:51.387 Process app_repeat pid: 2634858 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:51.387 spdk_app_start Round 0 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2634858 /var/tmp/spdk-nbd.sock 00:05:51.387 15:53:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2634858 ']' 00:05:51.387 15:53:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.387 15:53:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.387 15:53:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.387 15:53:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.387 15:53:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.387 15:53:19 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:51.387 [2024-12-15 15:53:19.840558] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
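Each app_repeat round that follows is the same nbd smoke test: create two 64 MiB malloc bdevs with 4096-byte blocks, export them through the kernel nbd driver as /dev/nbd0 and /dev/nbd1, push data through and verify it, then unwind. Stripped to its RPC calls against the /var/tmp/spdk-nbd.sock socket, the flow is roughly (a sketch, not a transcript):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096        # 64 MiB, 4 KiB blocks -> Malloc0
    $RPC bdev_malloc_create 64 4096        # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    $RPC nbd_get_disks                     # JSON map of nbd devices to bdevs
    # ... dd random data in, cmp it back (the verification traced below) ...
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1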
00:05:51.387 [2024-12-15 15:53:19.840613] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634858 ] 00:05:51.387 [2024-12-15 15:53:19.910972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.387 [2024-12-15 15:53:19.951592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.387 [2024-12-15 15:53:19.951595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.647 15:53:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.647 15:53:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:51.647 15:53:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.647 Malloc0 00:05:51.906 15:53:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.906 Malloc1 00:05:51.906 15:53:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.906 15:53:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.166 /dev/nbd0 00:05:52.166 15:53:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.166 15:53:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
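waitfornbd, mid-trace here, is how the harness blocks until the kernel actually exposes the device: poll /proc/partitions for the nbd name (the break on the next line is the grep matching), then prove a 4 KiB O_DIRECT read succeeds before declaring the disk usable. A standalone rendering follows; the pause between polls is an assumption (the trace shows only the counter and the grep/dd steps), and /tmp/nbdtest stands in for the harness's nbdtest scratch file.

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do            # wait for the partition entry
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                # assumed; not visible in the trace
        done
        for (( i = 1; i <= 20; i++ )); do            # then require a real direct read
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }
    waitfornbd nbd0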
00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.166 1+0 records in 00:05:52.166 1+0 records out 00:05:52.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225204 s, 18.2 MB/s 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.166 15:53:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.166 15:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.166 15:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.166 15:53:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.425 /dev/nbd1 00:05:52.425 15:53:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.425 15:53:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.425 1+0 records in 00:05:52.425 1+0 records out 00:05:52.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247194 s, 16.6 MB/s 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.425 15:53:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.426 15:53:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.426 15:53:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.426 15:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.426 15:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.426 15:53:20 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.426 15:53:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.426 15:53:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.684 { 00:05:52.684 "nbd_device": "/dev/nbd0", 00:05:52.684 "bdev_name": "Malloc0" 00:05:52.684 }, 00:05:52.684 { 00:05:52.684 "nbd_device": "/dev/nbd1", 00:05:52.684 "bdev_name": "Malloc1" 00:05:52.684 } 00:05:52.684 ]' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.684 { 00:05:52.684 "nbd_device": "/dev/nbd0", 00:05:52.684 "bdev_name": "Malloc0" 00:05:52.684 }, 00:05:52.684 { 00:05:52.684 "nbd_device": "/dev/nbd1", 00:05:52.684 "bdev_name": "Malloc1" 00:05:52.684 } 00:05:52.684 ]' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.684 /dev/nbd1' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.684 /dev/nbd1' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.684 256+0 records in 00:05:52.684 256+0 records out 00:05:52.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116397 s, 90.1 MB/s 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.684 256+0 records in 00:05:52.684 256+0 records out 00:05:52.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190439 s, 55.1 MB/s 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.684 256+0 records in 00:05:52.684 256+0 records out 00:05:52.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199223 s, 52.6 MB/s 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.684 15:53:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.943 15:53:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.202 15:53:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.461 15:53:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.461 15:53:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.721 15:53:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.721 [2024-12-15 15:53:22.264539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.980 [2024-12-15 15:53:22.300622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.980 [2024-12-15 15:53:22.300625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.980 [2024-12-15 15:53:22.341120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.980 [2024-12-15 15:53:22.341162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.270 15:53:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.270 15:53:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:57.270 spdk_app_start Round 1 00:05:57.270 15:53:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2634858 /var/tmp/spdk-nbd.sock 00:05:57.270 15:53:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2634858 ']' 00:05:57.270 15:53:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.270 15:53:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.270 15:53:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
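The waitforlisten call above gates every round: nothing proceeds until the freshly started app answers on /var/tmp/spdk-nbd.sock. A minimal sketch of that polling pattern, assuming a probe via the standard rpc_get_methods RPC (the real helper in autotest_common.sh may structure this differently):

  # Sketch only: poll until $pid is alive and its RPC socket answers a probe.
  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do            # mirrors max_retries=100 in the trace
      kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        return 0                               # socket is up and answering RPCs
      fi
      sleep 0.1
    done
    return 1
  }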
00:05:57.270 15:53:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.270 15:53:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.270 15:53:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.270 15:53:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:57.270 15:53:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.270 Malloc0 00:05:57.270 15:53:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.270 Malloc1 00:05:57.270 15:53:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.270 15:53:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.529 /dev/nbd0 00:05:57.529 15:53:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.529 15:53:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.529 15:53:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:57.529 15:53:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:57.529 15:53:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.529 15:53:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.529 15:53:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:57.529 15:53:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:57.530 15:53:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:57.530 15:53:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.530 15:53:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:57.530 1+0 records in 00:05:57.530 1+0 records out 00:05:57.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212613 s, 19.3 MB/s 00:05:57.530 15:53:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.530 15:53:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:57.530 15:53:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.530 15:53:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.530 15:53:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:57.530 15:53:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.530 15:53:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.530 15:53:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.789 /dev/nbd1 00:05:57.789 15:53:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.789 15:53:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.789 1+0 records in 00:05:57.789 1+0 records out 00:05:57.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244495 s, 16.8 MB/s 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.789 15:53:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:57.789 15:53:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.789 15:53:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.789 15:53:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.789 15:53:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.789 15:53:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.049 { 00:05:58.049 
"nbd_device": "/dev/nbd0", 00:05:58.049 "bdev_name": "Malloc0" 00:05:58.049 }, 00:05:58.049 { 00:05:58.049 "nbd_device": "/dev/nbd1", 00:05:58.049 "bdev_name": "Malloc1" 00:05:58.049 } 00:05:58.049 ]' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.049 { 00:05:58.049 "nbd_device": "/dev/nbd0", 00:05:58.049 "bdev_name": "Malloc0" 00:05:58.049 }, 00:05:58.049 { 00:05:58.049 "nbd_device": "/dev/nbd1", 00:05:58.049 "bdev_name": "Malloc1" 00:05:58.049 } 00:05:58.049 ]' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.049 /dev/nbd1' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.049 /dev/nbd1' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.049 256+0 records in 00:05:58.049 256+0 records out 00:05:58.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010881 s, 96.4 MB/s 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.049 256+0 records in 00:05:58.049 256+0 records out 00:05:58.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195299 s, 53.7 MB/s 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.049 256+0 records in 00:05:58.049 256+0 records out 00:05:58.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019597 s, 53.5 MB/s 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.049 15:53:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.308 15:53:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.568 15:53:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.568 15:53:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.568 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.568 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.568 15:53:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.827 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.827 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.827 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.827 15:53:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.827 15:53:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.827 15:53:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.827 15:53:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.827 15:53:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.827 15:53:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.827 15:53:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.086 [2024-12-15 15:53:27.527513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.086 [2024-12-15 15:53:27.562461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.086 [2024-12-15 15:53:27.562464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.086 [2024-12-15 15:53:27.603822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.086 [2024-12-15 15:53:27.603862] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.473 15:53:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.473 15:53:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:02.473 spdk_app_start Round 2 00:06:02.473 15:53:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2634858 /var/tmp/spdk-nbd.sock 00:06:02.473 15:53:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2634858 ']' 00:06:02.473 15:53:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.473 15:53:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.473 15:53:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
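Round 2 will now repeat the data pass that Rounds 0 and 1 just completed: fill a scratch file with random 4 KiB blocks, copy it onto each nbd device with O_DIRECT, read it back, and byte-compare. A condensed sketch of that cycle (paths shortened; the trace uses .../test/event/nbdrandtest):

  # Sketch of the nbd_dd_data_verify write+verify cycle traced above.
  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=/tmp/nbdrandtest                 # stand-in for the workspace path

  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # bypass the page cache
  done

  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"         # fails loudly on the first differing byte
  done
  rm "$tmp_file"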
00:06:02.473 15:53:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.473 15:53:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.473 15:53:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.473 15:53:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:02.473 15:53:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.473 Malloc0 00:06:02.473 15:53:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.473 Malloc1 00:06:02.473 15:53:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.473 15:53:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.731 /dev/nbd0 00:06:02.731 15:53:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.731 15:53:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:02.731 1+0 records in 00:06:02.731 1+0 records out 00:06:02.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216424 s, 18.9 MB/s 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:02.731 15:53:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:02.731 15:53:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.731 15:53:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.731 15:53:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.990 /dev/nbd1 00:06:02.990 15:53:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.990 15:53:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.990 1+0 records in 00:06:02.990 1+0 records out 00:06:02.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222498 s, 18.4 MB/s 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:02.990 15:53:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:02.990 15:53:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.990 15:53:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.990 15:53:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.990 15:53:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.990 15:53:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.250 { 00:06:03.250 
"nbd_device": "/dev/nbd0", 00:06:03.250 "bdev_name": "Malloc0" 00:06:03.250 }, 00:06:03.250 { 00:06:03.250 "nbd_device": "/dev/nbd1", 00:06:03.250 "bdev_name": "Malloc1" 00:06:03.250 } 00:06:03.250 ]' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.250 { 00:06:03.250 "nbd_device": "/dev/nbd0", 00:06:03.250 "bdev_name": "Malloc0" 00:06:03.250 }, 00:06:03.250 { 00:06:03.250 "nbd_device": "/dev/nbd1", 00:06:03.250 "bdev_name": "Malloc1" 00:06:03.250 } 00:06:03.250 ]' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.250 /dev/nbd1' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.250 /dev/nbd1' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.250 256+0 records in 00:06:03.250 256+0 records out 00:06:03.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470507 s, 223 MB/s 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.250 256+0 records in 00:06:03.250 256+0 records out 00:06:03.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153532 s, 68.3 MB/s 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.250 256+0 records in 00:06:03.250 256+0 records out 00:06:03.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200184 s, 52.4 MB/s 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.250 15:53:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.510 15:53:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.769 15:53:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.028 15:53:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.028 15:53:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.028 15:53:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.288 [2024-12-15 15:53:32.763896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.288 [2024-12-15 15:53:32.798259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.288 [2024-12-15 15:53:32.798261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.288 [2024-12-15 15:53:32.838081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.288 [2024-12-15 15:53:32.838121] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.579 15:53:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2634858 /var/tmp/spdk-nbd.sock 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2634858 ']' 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
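The count=0 check earlier in this round is how the test proves both exports are really gone: nbd_get_disks must return an empty JSON array. A sketch of the counting idiom the trace suggests (rpc.py path abbreviated):

  # Sketch: count currently exported nbd devices from the RPC's JSON output.
  nbd_get_count_sketch() {
    local rpc_server=$1
    scripts/rpc.py -s "$rpc_server" nbd_get_disks \
      | jq -r '.[] | .nbd_device' \
      | grep -c /dev/nbd || true    # grep -c prints 0 (but exits 1) on no match
  }

  # After nbd_stop_disk has run for every device this should print 0:
  # nbd_get_count_sketch /var/tmp/spdk-nbd.sock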
00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:07.579 15:53:35 event.app_repeat -- event/event.sh@39 -- # killprocess 2634858 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2634858 ']' 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2634858 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2634858 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2634858' 00:06:07.579 killing process with pid 2634858 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2634858 00:06:07.579 15:53:35 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2634858 00:06:07.579 spdk_app_start is called in Round 0. 00:06:07.579 Shutdown signal received, stop current app iteration 00:06:07.579 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:07.579 spdk_app_start is called in Round 1. 00:06:07.579 Shutdown signal received, stop current app iteration 00:06:07.579 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:07.579 spdk_app_start is called in Round 2. 00:06:07.579 Shutdown signal received, stop current app iteration 00:06:07.579 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:07.579 spdk_app_start is called in Round 3. 
00:06:07.579 Shutdown signal received, stop current app iteration 00:06:07.579 15:53:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:07.579 15:53:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:07.579 00:06:07.579 real 0m16.194s 00:06:07.579 user 0m34.943s 00:06:07.579 sys 0m2.969s 00:06:07.579 15:53:36 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.579 15:53:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.579 ************************************ 00:06:07.579 END TEST app_repeat 00:06:07.579 ************************************ 00:06:07.579 15:53:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:07.579 15:53:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.579 15:53:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.579 15:53:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.579 15:53:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.579 ************************************ 00:06:07.579 START TEST cpu_locks 00:06:07.579 ************************************ 00:06:07.579 15:53:36 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.839 * Looking for test storage... 00:06:07.839 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:07.839 15:53:36 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.839 15:53:36 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.839 15:53:36 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.839 15:53:36 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.839 15:53:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:07.839 15:53:36 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.839 15:53:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.839 --rc genhtml_branch_coverage=1 00:06:07.839 --rc genhtml_function_coverage=1 00:06:07.839 --rc genhtml_legend=1 00:06:07.839 --rc geninfo_all_blocks=1 00:06:07.840 --rc geninfo_unexecuted_blocks=1 00:06:07.840 00:06:07.840 ' 00:06:07.840 15:53:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.840 --rc genhtml_branch_coverage=1 00:06:07.840 --rc genhtml_function_coverage=1 00:06:07.840 --rc genhtml_legend=1 00:06:07.840 --rc geninfo_all_blocks=1 00:06:07.840 --rc geninfo_unexecuted_blocks=1 00:06:07.840 00:06:07.840 ' 00:06:07.840 15:53:36 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.840 --rc genhtml_branch_coverage=1 00:06:07.840 --rc genhtml_function_coverage=1 00:06:07.840 --rc genhtml_legend=1 00:06:07.840 --rc geninfo_all_blocks=1 00:06:07.840 --rc geninfo_unexecuted_blocks=1 00:06:07.840 00:06:07.840 ' 00:06:07.840 15:53:36 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.840 --rc genhtml_branch_coverage=1 00:06:07.840 --rc genhtml_function_coverage=1 00:06:07.840 --rc genhtml_legend=1 00:06:07.840 --rc geninfo_all_blocks=1 00:06:07.840 --rc geninfo_unexecuted_blocks=1 00:06:07.840 00:06:07.840 ' 00:06:07.840 15:53:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:07.840 15:53:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:07.840 15:53:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:07.840 15:53:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:07.840 15:53:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.840 15:53:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.840 15:53:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.840 ************************************ 
00:06:07.840 START TEST default_locks 00:06:07.840 ************************************ 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2637905 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2637905 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2637905 ']' 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.840 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.840 [2024-12-15 15:53:36.358578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:07.840 [2024-12-15 15:53:36.358625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637905 ] 00:06:08.100 [2024-12-15 15:53:36.430048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.100 [2024-12-15 15:53:36.468900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.100 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.100 15:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:08.100 15:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2637905 00:06:08.100 15:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2637905 00:06:08.100 15:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.668 lslocks: write error 00:06:08.668 15:53:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2637905 00:06:08.668 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2637905 ']' 00:06:08.668 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2637905 00:06:08.668 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:08.668 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.668 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2637905 00:06:08.928 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.928 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.928 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2637905' 00:06:08.928 killing process with pid 2637905 00:06:08.928 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2637905 00:06:08.928 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2637905 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2637905 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2637905 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2637905 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2637905 ']' 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
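This second waitforlisten is meant to fail: default_locks already killed pid 2637905, and cpu_locks.sh@52 wraps the call in NOT, which inverts the helper's exit status. A simplified sketch of that idiom, under the name seen in the trace (the real helper also does the numeric es=1 bookkeeping visible just below):

  # Sketch: succeed only when the wrapped command fails (negative-test idiom).
  NOT() {
    if "$@"; then
      return 1      # unexpected success
    fi
    return 0        # the failure we were waiting for
  }

  # NOT waitforlisten 2637905   # pid is gone, so the ERROR below is the pass case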
00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.188 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2637905) - No such process 00:06:09.188 ERROR: process (pid: 2637905) is no longer running 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.188 00:06:09.188 real 0m1.263s 00:06:09.188 user 0m1.221s 00:06:09.188 sys 0m0.614s 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.188 15:53:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.188 ************************************ 00:06:09.188 END TEST default_locks 00:06:09.188 ************************************ 00:06:09.188 15:53:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.188 15:53:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.188 15:53:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.188 15:53:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.188 ************************************ 00:06:09.188 START TEST default_locks_via_rpc 00:06:09.188 ************************************ 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2638104 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2638104 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2638104 ']' 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.188 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.188 [2024-12-15 15:53:37.670195] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:09.189 [2024-12-15 15:53:37.670240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638104 ] 00:06:09.189 [2024-12-15 15:53:37.737387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.448 [2024-12-15 15:53:37.777286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.448 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.449 15:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.449 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2638104 00:06:09.449 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2638104 00:06:09.449 15:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2638104 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2638104 ']' 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2638104 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2638104 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.018 
15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2638104' 00:06:10.018 killing process with pid 2638104 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2638104 00:06:10.018 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2638104 00:06:10.277 00:06:10.277 real 0m1.209s 00:06:10.277 user 0m1.189s 00:06:10.277 sys 0m0.550s 00:06:10.277 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.277 15:53:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.277 ************************************ 00:06:10.277 END TEST default_locks_via_rpc 00:06:10.277 ************************************ 00:06:10.538 15:53:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:10.538 15:53:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.538 15:53:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.538 15:53:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.538 ************************************ 00:06:10.538 START TEST non_locking_app_on_locked_coremask 00:06:10.538 ************************************ 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2638368 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2638368 /var/tmp/spdk.sock 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2638368 ']' 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.538 15:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.538 [2024-12-15 15:53:38.972905] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:10.538 [2024-12-15 15:53:38.972949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638368 ] 00:06:10.538 [2024-12-15 15:53:39.043178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.538 [2024-12-15 15:53:39.082368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2638449 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2638449 /var/tmp/spdk2.sock 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2638449 ']' 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.798 15:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.798 [2024-12-15 15:53:39.331891] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:10.798 [2024-12-15 15:53:39.331944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638449 ] 00:06:11.058 [2024-12-15 15:53:39.425085] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
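Every lock probe in this suite reduces to the same two commands: list the file locks held by the target pid with lslocks and grep for the spdk_cpu_lock prefix. A sketch, assuming the /var/tmp/spdk_cpu_lock_NNN naming that appears later in the log:

  locks_exist() {
      local pid=$1
      # spdk_tgt flocks one /var/tmp/spdk_cpu_lock_NNN file per claimed core
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }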
00:06:11.058 [2024-12-15 15:53:39.425106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.058 [2024-12-15 15:53:39.503022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.627 15:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.627 15:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.627 15:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2638368 00:06:11.627 15:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2638368 00:06:11.627 15:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.566 lslocks: write error 00:06:12.566 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2638368 00:06:12.566 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2638368 ']' 00:06:12.566 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2638368 00:06:12.566 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:12.566 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.566 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2638368 00:06:12.826 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.826 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.826 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2638368' 00:06:12.827 killing process with pid 2638368 00:06:12.827 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2638368 00:06:12.827 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2638368 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2638449 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2638449 ']' 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2638449 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2638449 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2638449' 00:06:13.401 
killing process with pid 2638449 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2638449 00:06:13.401 15:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2638449 00:06:13.662 00:06:13.662 real 0m3.238s 00:06:13.662 user 0m3.405s 00:06:13.662 sys 0m1.249s 00:06:13.662 15:53:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.662 15:53:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.662 ************************************ 00:06:13.662 END TEST non_locking_app_on_locked_coremask 00:06:13.662 ************************************ 00:06:13.662 15:53:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:13.662 15:53:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.662 15:53:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.662 15:53:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.922 ************************************ 00:06:13.922 START TEST locking_app_on_unlocked_coremask 00:06:13.922 ************************************ 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2638956 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2638956 /var/tmp/spdk.sock 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2638956 ']' 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.922 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.922 [2024-12-15 15:53:42.293228] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:13.922 [2024-12-15 15:53:42.293275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638956 ] 00:06:13.922 [2024-12-15 15:53:42.362677] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
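The locking_app_on_unlocked_coremask test starting here inverts the previous setup: the primary target carries --disable-cpumask-locks and claims nothing, so the second instance is free to take core 0's lock itself. Sketched with the mask and socket paths used in this run:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # primary on core 0, no lock files
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # second instance claims core 0's lock itself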
00:06:13.922 [2024-12-15 15:53:42.362706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.922 [2024-12-15 15:53:42.401588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2639146 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2639146 /var/tmp/spdk2.sock 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2639146 ']' 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.182 15:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.182 [2024-12-15 15:53:42.650277] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:14.182 [2024-12-15 15:53:42.650331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2639146 ] 00:06:14.182 [2024-12-15 15:53:42.743717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.442 [2024-12-15 15:53:42.827454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.011 15:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.011 15:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.011 15:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2639146 00:06:15.011 15:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.011 15:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2639146 00:06:16.390 lslocks: write error 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2638956 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2638956 ']' 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2638956 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2638956 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2638956' 00:06:16.390 killing process with pid 2638956 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2638956 00:06:16.390 15:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2638956 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2639146 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2639146 ']' 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2639146 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2639146 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.959 15:53:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2639146' 00:06:16.959 killing process with pid 2639146 00:06:16.959 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2639146 00:06:16.960 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2639146 00:06:17.219 00:06:17.219 real 0m3.509s 00:06:17.219 user 0m3.680s 00:06:17.219 sys 0m1.308s 00:06:17.219 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.219 15:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.219 ************************************ 00:06:17.219 END TEST locking_app_on_unlocked_coremask 00:06:17.219 ************************************ 00:06:17.479 15:53:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:17.479 15:53:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.479 15:53:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.479 15:53:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.479 ************************************ 00:06:17.479 START TEST locking_app_on_locked_coremask 00:06:17.479 ************************************ 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2639759 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2639759 /var/tmp/spdk.sock 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2639759 ']' 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.479 15:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.479 [2024-12-15 15:53:45.883355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
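The recurring kill sequence in the log (kill -0, ps comm=, an echo, then kill and wait) is the killprocess helper. A reduced sketch; the real function in autotest_common.sh also special-cases sudo-wrapped processes and non-Linux uname output:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                             # assert the process is still alive
      local name
      name=$(ps --no-headers -o comm= "$pid")    # reports reactor_0 for an SPDK primary
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # reap it and propagate its exit status
  }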
00:06:17.479 [2024-12-15 15:53:45.883400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2639759 ] 00:06:17.479 [2024-12-15 15:53:45.952424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.479 [2024-12-15 15:53:45.991233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2639772 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2639772 /var/tmp/spdk2.sock 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2639772 /var/tmp/spdk2.sock 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2639772 /var/tmp/spdk2.sock 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2639772 ']' 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.739 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.739 [2024-12-15 15:53:46.227582] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:17.739 [2024-12-15 15:53:46.227630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2639772 ] 00:06:17.999 [2024-12-15 15:53:46.321326] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2639759 has claimed it. 00:06:17.999 [2024-12-15 15:53:46.321367] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.568 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2639772) - No such process 00:06:18.568 ERROR: process (pid: 2639772) is no longer running 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2639759 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2639759 00:06:18.568 15:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.829 lslocks: write error 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2639759 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2639759 ']' 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2639759 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2639759 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2639759' 00:06:18.829 killing process with pid 2639759 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2639759 00:06:18.829 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2639759 00:06:19.399 00:06:19.399 real 0m1.859s 00:06:19.399 user 0m1.991s 00:06:19.399 sys 0m0.674s 00:06:19.399 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:19.399 15:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.399 ************************************ 00:06:19.399 END TEST locking_app_on_locked_coremask 00:06:19.399 ************************************ 00:06:19.399 15:53:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:19.399 15:53:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.399 15:53:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.399 15:53:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.399 ************************************ 00:06:19.399 START TEST locking_overlapped_coremask 00:06:19.399 ************************************ 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2640064 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2640064 /var/tmp/spdk.sock 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2640064 ']' 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.399 15:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.399 [2024-12-15 15:53:47.823087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:19.399 [2024-12-15 15:53:47.823135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640064 ] 00:06:19.399 [2024-12-15 15:53:47.892213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.399 [2024-12-15 15:53:47.931135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.399 [2024-12-15 15:53:47.931228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.399 [2024-12-15 15:53:47.931228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2640078 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2640078 /var/tmp/spdk2.sock 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2640078 /var/tmp/spdk2.sock 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2640078 /var/tmp/spdk2.sock 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2640078 ']' 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.659 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.659 [2024-12-15 15:53:48.177924] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
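locking_overlapped_coremask uses partially intersecting masks: the primary takes 0x7 (cores 0-2) while the second instance asks for 0x1c (cores 2-4), so they collide only on core 2. As the error just below shows, a single contested core is enough to abort startup:

  ./build/bin/spdk_tgt -m 0x7 &                             # locks cores 0, 1 and 2
  NOT ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock   # core 2 is taken, startup aborts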
00:06:19.659 [2024-12-15 15:53:48.177971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640078 ] 00:06:19.919 [2024-12-15 15:53:48.277510] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2640064 has claimed it. 00:06:19.919 [2024-12-15 15:53:48.277552] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.488 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2640078) - No such process 00:06:20.488 ERROR: process (pid: 2640078) is no longer running 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2640064 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2640064 ']' 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2640064 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2640064 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2640064' 00:06:20.488 killing process with pid 2640064 00:06:20.488 15:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2640064 00:06:20.488 15:53:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2640064 00:06:20.749 00:06:20.749 real 0m1.433s 00:06:20.749 user 0m3.866s 00:06:20.749 sys 0m0.447s 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.749 ************************************ 00:06:20.749 END TEST locking_overlapped_coremask 00:06:20.749 ************************************ 00:06:20.749 15:53:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:20.749 15:53:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.749 15:53:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.749 15:53:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.749 ************************************ 00:06:20.749 START TEST locking_overlapped_coremask_via_rpc 00:06:20.749 ************************************ 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2640370 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2640370 /var/tmp/spdk.sock 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2640370 ']' 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.749 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.008 [2024-12-15 15:53:49.341619] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:21.008 [2024-12-15 15:53:49.341668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640370 ] 00:06:21.008 [2024-12-15 15:53:49.410133] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
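The via_rpc variant defers the claim: both targets start with --disable-cpumask-locks (the second instance's launch follows below), and the primary only takes its cores once framework_enable_cpumask_locks arrives. A sketch of that flow, using SPDK's stock rpc.py client:

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # primary, cores 0-2, unlocked
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # second, cores 2-4, unlocked
  ./scripts/rpc.py framework_enable_cpumask_locks                                # primary claims cores 0-2 now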
00:06:21.008 [2024-12-15 15:53:49.410158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.008 [2024-12-15 15:53:49.447703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.008 [2024-12-15 15:53:49.447725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.008 [2024-12-15 15:53:49.447727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2640380 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2640380 /var/tmp/spdk2.sock 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2640380 ']' 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.268 15:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.269 [2024-12-15 15:53:49.692669] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:21.269 [2024-12-15 15:53:49.692734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640380 ] 00:06:21.269 [2024-12-15 15:53:49.791170] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.269 [2024-12-15 15:53:49.791201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.529 [2024-12-15 15:53:49.871520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.529 [2024-12-15 15:53:49.871639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.529 [2024-12-15 15:53:49.871641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.097 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.098 [2024-12-15 15:53:50.549759] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2640370 has claimed it. 
00:06:22.098 request: 00:06:22.098 { 00:06:22.098 "method": "framework_enable_cpumask_locks", 00:06:22.098 "req_id": 1 00:06:22.098 } 00:06:22.098 Got JSON-RPC error response 00:06:22.098 response: 00:06:22.098 { 00:06:22.098 "code": -32603, 00:06:22.098 "message": "Failed to claim CPU core: 2" 00:06:22.098 } 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2640370 /var/tmp/spdk.sock 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2640370 ']' 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.098 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2640380 /var/tmp/spdk2.sock 00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2640380 ']' 00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
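When the second target sends the same RPC over /var/tmp/spdk2.sock it receives the -32603 'Failed to claim CPU core: 2' response shown above, and check_remaining_locks, run just below, confirms that only the primary's lock files survive. The check is a straight glob-versus-brace-expansion comparison:

  locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files actually exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one file per core in mask 0x7
  [[ ${locks[*]} == "${locks_expected[*]}" ]]         # any stale or missing file fails the test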
00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.357 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.617 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.617 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.617 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:22.617 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.617 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.617 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.617 00:06:22.617 real 0m1.680s 00:06:22.617 user 0m0.788s 00:06:22.617 sys 0m0.161s 00:06:22.617 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.617 15:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.617 ************************************ 00:06:22.617 END TEST locking_overlapped_coremask_via_rpc 00:06:22.617 ************************************ 00:06:22.617 15:53:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:22.617 15:53:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2640370 ]] 00:06:22.617 15:53:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2640370 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2640370 ']' 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2640370 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2640370 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2640370' 00:06:22.617 killing process with pid 2640370 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2640370 00:06:22.617 15:53:51 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2640370 00:06:22.877 15:53:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2640380 ]] 00:06:22.877 15:53:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2640380 00:06:22.877 15:53:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2640380 ']' 00:06:22.877 15:53:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2640380 00:06:22.877 15:53:51 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.877 15:53:51 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:22.877 15:53:51 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2640380 00:06:23.136 15:53:51 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:23.136 15:53:51 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:23.136 15:53:51 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2640380' 00:06:23.136 killing process with pid 2640380 00:06:23.136 15:53:51 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2640380 00:06:23.136 15:53:51 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2640380 00:06:23.396 15:53:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.396 15:53:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.396 15:53:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2640370 ]] 00:06:23.396 15:53:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2640370 00:06:23.396 15:53:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2640370 ']' 00:06:23.396 15:53:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2640370 00:06:23.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2640370) - No such process 00:06:23.396 15:53:51 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2640370 is not found' 00:06:23.396 Process with pid 2640370 is not found 00:06:23.396 15:53:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2640380 ]] 00:06:23.396 15:53:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2640380 00:06:23.396 15:53:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2640380 ']' 00:06:23.396 15:53:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2640380 00:06:23.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2640380) - No such process 00:06:23.396 15:53:51 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2640380 is not found' 00:06:23.396 Process with pid 2640380 is not found 00:06:23.396 15:53:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.396 00:06:23.396 real 0m15.721s 00:06:23.396 user 0m26.045s 00:06:23.396 sys 0m6.103s 00:06:23.396 15:53:51 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.396 15:53:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.396 ************************************ 00:06:23.396 END TEST cpu_locks 00:06:23.396 ************************************ 00:06:23.396 00:06:23.396 real 0m40.834s 00:06:23.396 user 1m16.001s 00:06:23.396 sys 0m10.213s 00:06:23.396 15:53:51 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.396 15:53:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.396 ************************************ 00:06:23.396 END TEST event 00:06:23.396 ************************************ 00:06:23.396 15:53:51 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:23.396 15:53:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.396 15:53:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.396 15:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:23.396 ************************************ 00:06:23.396 START TEST thread 00:06:23.396 ************************************ 00:06:23.396 15:53:51 thread -- common/autotest_common.sh@1125 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:23.656 * Looking for test storage... 00:06:23.656 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.656 15:53:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.656 15:53:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.656 15:53:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.656 15:53:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.656 15:53:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.656 15:53:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.656 15:53:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.656 15:53:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.656 15:53:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.656 15:53:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.656 15:53:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.656 15:53:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:23.656 15:53:52 thread -- scripts/common.sh@345 -- # : 1 00:06:23.656 15:53:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.656 15:53:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.656 15:53:52 thread -- scripts/common.sh@365 -- # decimal 1 00:06:23.656 15:53:52 thread -- scripts/common.sh@353 -- # local d=1 00:06:23.656 15:53:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.656 15:53:52 thread -- scripts/common.sh@355 -- # echo 1 00:06:23.656 15:53:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.656 15:53:52 thread -- scripts/common.sh@366 -- # decimal 2 00:06:23.656 15:53:52 thread -- scripts/common.sh@353 -- # local d=2 00:06:23.656 15:53:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.656 15:53:52 thread -- scripts/common.sh@355 -- # echo 2 00:06:23.656 15:53:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.656 15:53:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.656 15:53:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.656 15:53:52 thread -- scripts/common.sh@368 -- # return 0 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.656 --rc genhtml_branch_coverage=1 00:06:23.656 --rc genhtml_function_coverage=1 00:06:23.656 --rc genhtml_legend=1 00:06:23.656 --rc geninfo_all_blocks=1 00:06:23.656 --rc geninfo_unexecuted_blocks=1 00:06:23.656 00:06:23.656 ' 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.656 --rc genhtml_branch_coverage=1 00:06:23.656 --rc genhtml_function_coverage=1 00:06:23.656 --rc genhtml_legend=1 00:06:23.656 --rc geninfo_all_blocks=1 00:06:23.656 --rc geninfo_unexecuted_blocks=1 00:06:23.656 00:06:23.656 ' 00:06:23.656 15:53:52 thread -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.656 --rc genhtml_branch_coverage=1 00:06:23.656 --rc genhtml_function_coverage=1 00:06:23.656 --rc genhtml_legend=1 00:06:23.656 --rc geninfo_all_blocks=1 00:06:23.656 --rc geninfo_unexecuted_blocks=1 00:06:23.656 00:06:23.656 ' 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.656 --rc genhtml_branch_coverage=1 00:06:23.656 --rc genhtml_function_coverage=1 00:06:23.656 --rc genhtml_legend=1 00:06:23.656 --rc geninfo_all_blocks=1 00:06:23.656 --rc geninfo_unexecuted_blocks=1 00:06:23.656 00:06:23.656 ' 00:06:23.656 15:53:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.656 15:53:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.656 ************************************ 00:06:23.656 START TEST thread_poller_perf 00:06:23.656 ************************************ 00:06:23.656 15:53:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.656 [2024-12-15 15:53:52.149922] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:23.657 [2024-12-15 15:53:52.150002] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641007 ] 00:06:23.657 [2024-12-15 15:53:52.220959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.916 [2024-12-15 15:53:52.259468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.916 Running 1000 pollers for 1 seconds with 1 microseconds period. 
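The summary table that follows reports busy TSC cycles, total poller executions, and the TSC frequency; poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick back-of-the-envelope check against the figures below (a sketch, not part of the harness):

    # poller_cost (cyc)  = busy / total_run_count
    # poller_cost (nsec) = cyc * 1e9 / tsc_hz
    $ echo '2508262218 / 432000' | bc                 # ~5806 cyc per poller call
    $ echo '5806 * 1000000000 / 2500000000' | bc      # ~2322 nsec at a 2.5 GHz TSC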
00:06:24.854 [2024-12-15T14:53:53.424Z] ====================================== 00:06:24.854 [2024-12-15T14:53:53.424Z] busy:2508262218 (cyc) 00:06:24.854 [2024-12-15T14:53:53.424Z] total_run_count: 432000 00:06:24.854 [2024-12-15T14:53:53.424Z] tsc_hz: 2500000000 (cyc) 00:06:24.854 [2024-12-15T14:53:53.424Z] ====================================== 00:06:24.854 [2024-12-15T14:53:53.424Z] poller_cost: 5806 (cyc), 2322 (nsec) 00:06:24.854 00:06:24.854 real 0m1.196s 00:06:24.854 user 0m1.099s 00:06:24.854 sys 0m0.092s 00:06:24.854 15:53:53 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.854 15:53:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.854 ************************************ 00:06:24.854 END TEST thread_poller_perf 00:06:24.854 ************************************ 00:06:24.854 15:53:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.854 15:53:53 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:24.854 15:53:53 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.854 15:53:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.854 ************************************ 00:06:24.854 START TEST thread_poller_perf 00:06:24.854 ************************************ 00:06:24.854 15:53:53 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.113 [2024-12-15 15:53:53.425297] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:25.113 [2024-12-15 15:53:53.425378] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641248 ] 00:06:25.113 [2024-12-15 15:53:53.497863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.113 [2024-12-15 15:53:53.535945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.113 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:26.050 [2024-12-15T14:53:54.620Z] ====================================== 00:06:26.050 [2024-12-15T14:53:54.620Z] busy:2501795758 (cyc) 00:06:26.050 [2024-12-15T14:53:54.620Z] total_run_count: 5604000 00:06:26.050 [2024-12-15T14:53:54.620Z] tsc_hz: 2500000000 (cyc) 00:06:26.050 [2024-12-15T14:53:54.620Z] ====================================== 00:06:26.050 [2024-12-15T14:53:54.620Z] poller_cost: 446 (cyc), 178 (nsec) 00:06:26.050 00:06:26.050 real 0m1.194s 00:06:26.050 user 0m1.099s 00:06:26.050 sys 0m0.091s 00:06:26.050 15:53:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.050 15:53:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.050 ************************************ 00:06:26.050 END TEST thread_poller_perf 00:06:26.050 ************************************ 00:06:26.309 15:53:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.309 00:06:26.309 real 0m2.737s 00:06:26.309 user 0m2.370s 00:06:26.309 sys 0m0.386s 00:06:26.309 15:53:54 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.309 15:53:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.309 ************************************ 00:06:26.309 END TEST thread 00:06:26.309 ************************************ 00:06:26.309 15:53:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.309 15:53:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:26.309 15:53:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.309 15:53:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.309 15:53:54 -- common/autotest_common.sh@10 -- # set +x 00:06:26.309 ************************************ 00:06:26.309 START TEST app_cmdline 00:06:26.309 ************************************ 00:06:26.309 15:53:54 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:26.309 * Looking for test storage... 
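Judging by the banners, the only difference between the two poller_perf runs is the period flag (-l 1 vs -l 0 microseconds): the first registers timed pollers, the second busy-polling active pollers. Both runs burn essentially the whole one-second cycle budget (~2.5e9 cyc at tsc_hz 2500000000); what changes is the per-invocation cost, the timer path being far more expensive than the active-poller path:

    432000  calls * 5806 cyc = 2508192000 cyc   (~busy:2508262218, timed pollers)
    5604000 calls * 446 cyc  = 2499384000 cyc   (~busy:2501795758, active pollers)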
00:06:26.309 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:26.309 15:53:54 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:26.309 15:53:54 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:26.309 15:53:54 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:26.569 15:53:54 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:26.569 15:53:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.569 15:53:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.569 15:53:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.569 15:53:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.570 15:53:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:26.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.570 --rc genhtml_branch_coverage=1 00:06:26.570 --rc genhtml_function_coverage=1 00:06:26.570 --rc genhtml_legend=1 00:06:26.570 --rc geninfo_all_blocks=1 00:06:26.570 --rc geninfo_unexecuted_blocks=1 00:06:26.570 00:06:26.570 ' 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:26.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.570 --rc genhtml_branch_coverage=1 00:06:26.570 --rc genhtml_function_coverage=1 00:06:26.570 --rc genhtml_legend=1 00:06:26.570 --rc geninfo_all_blocks=1 00:06:26.570 --rc geninfo_unexecuted_blocks=1 
00:06:26.570 00:06:26.570 ' 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:26.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.570 --rc genhtml_branch_coverage=1 00:06:26.570 --rc genhtml_function_coverage=1 00:06:26.570 --rc genhtml_legend=1 00:06:26.570 --rc geninfo_all_blocks=1 00:06:26.570 --rc geninfo_unexecuted_blocks=1 00:06:26.570 00:06:26.570 ' 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:26.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.570 --rc genhtml_branch_coverage=1 00:06:26.570 --rc genhtml_function_coverage=1 00:06:26.570 --rc genhtml_legend=1 00:06:26.570 --rc geninfo_all_blocks=1 00:06:26.570 --rc geninfo_unexecuted_blocks=1 00:06:26.570 00:06:26.570 ' 00:06:26.570 15:53:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.570 15:53:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2641556 00:06:26.570 15:53:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2641556 00:06:26.570 15:53:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2641556 ']' 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.570 15:53:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.570 [2024-12-15 15:53:54.953223] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:26.570 [2024-12-15 15:53:54.953274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641556 ] 00:06:26.570 [2024-12-15 15:53:55.020116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.570 [2024-12-15 15:53:55.057812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.828 15:53:55 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.828 15:53:55 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:26.829 15:53:55 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:27.088 { 00:06:27.088 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:27.088 "fields": { 00:06:27.088 "major": 24, 00:06:27.088 "minor": 9, 00:06:27.088 "patch": 1, 00:06:27.088 "suffix": "-pre", 00:06:27.088 "commit": "b18e1bd62" 00:06:27.088 } 00:06:27.088 } 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.088 15:53:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:27.088 15:53:55 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:27.089 15:53:55 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.348 request: 00:06:27.348 { 00:06:27.348 "method": "env_dpdk_get_mem_stats", 00:06:27.348 "req_id": 1 00:06:27.348 } 00:06:27.348 Got JSON-RPC error response 00:06:27.348 response: 00:06:27.348 { 00:06:27.348 "code": -32601, 00:06:27.348 "message": "Method not found" 00:06:27.348 } 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.348 15:53:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2641556 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2641556 ']' 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2641556 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2641556 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2641556' 00:06:27.348 killing process with pid 2641556 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@969 -- # kill 2641556 00:06:27.348 15:53:55 app_cmdline -- common/autotest_common.sh@974 -- # wait 2641556 00:06:27.608 00:06:27.608 real 0m1.348s 00:06:27.608 user 0m1.535s 00:06:27.608 sys 0m0.498s 00:06:27.608 15:53:56 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.608 15:53:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.608 ************************************ 00:06:27.608 END TEST app_cmdline 00:06:27.608 ************************************ 00:06:27.608 15:53:56 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:27.608 15:53:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.608 15:53:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.608 15:53:56 -- common/autotest_common.sh@10 -- # set +x 00:06:27.608 ************************************ 00:06:27.608 START TEST version 00:06:27.608 ************************************ 00:06:27.608 15:53:56 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:27.919 * Looking for test storage... 
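The env_dpdk_get_mem_stats failure above is expected: the spdk_tgt under test was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the JSON-RPC server rejects any method outside that allow list with code -32601 ("Method not found") before dispatching it, which is exactly what cmdline.sh asserts. Reproduced by hand against the same socket it would look roughly like (a sketch):

    $ scripts/rpc.py rpc_get_methods           # allowed -> ["rpc_get_methods", "spdk_get_version"]
    $ scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow list -> error -32601, Method not found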
00:06:27.919 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:27.919 15:53:56 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.919 15:53:56 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.919 15:53:56 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.919 15:53:56 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.919 15:53:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.919 15:53:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.919 15:53:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.919 15:53:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.919 15:53:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.919 15:53:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.919 15:53:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.919 15:53:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.919 15:53:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.919 15:53:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.919 15:53:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.919 15:53:56 version -- scripts/common.sh@344 -- # case "$op" in 00:06:27.919 15:53:56 version -- scripts/common.sh@345 -- # : 1 00:06:27.919 15:53:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.919 15:53:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.919 15:53:56 version -- scripts/common.sh@365 -- # decimal 1 00:06:27.919 15:53:56 version -- scripts/common.sh@353 -- # local d=1 00:06:27.919 15:53:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.919 15:53:56 version -- scripts/common.sh@355 -- # echo 1 00:06:27.919 15:53:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.919 15:53:56 version -- scripts/common.sh@366 -- # decimal 2 00:06:27.919 15:53:56 version -- scripts/common.sh@353 -- # local d=2 00:06:27.919 15:53:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.919 15:53:56 version -- scripts/common.sh@355 -- # echo 2 00:06:27.919 15:53:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.919 15:53:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.919 15:53:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.920 15:53:56 version -- scripts/common.sh@368 -- # return 0 00:06:27.920 15:53:56 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.920 15:53:56 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.920 --rc genhtml_branch_coverage=1 00:06:27.920 --rc genhtml_function_coverage=1 00:06:27.920 --rc genhtml_legend=1 00:06:27.920 --rc geninfo_all_blocks=1 00:06:27.920 --rc geninfo_unexecuted_blocks=1 00:06:27.920 00:06:27.920 ' 00:06:27.920 15:53:56 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.920 --rc genhtml_branch_coverage=1 00:06:27.920 --rc genhtml_function_coverage=1 00:06:27.920 --rc genhtml_legend=1 00:06:27.920 --rc geninfo_all_blocks=1 00:06:27.920 --rc geninfo_unexecuted_blocks=1 00:06:27.920 00:06:27.920 ' 00:06:27.920 15:53:56 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.920 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.920 --rc genhtml_branch_coverage=1 00:06:27.920 --rc genhtml_function_coverage=1 00:06:27.920 --rc genhtml_legend=1 00:06:27.920 --rc geninfo_all_blocks=1 00:06:27.920 --rc geninfo_unexecuted_blocks=1 00:06:27.920 00:06:27.920 ' 00:06:27.920 15:53:56 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.920 --rc genhtml_branch_coverage=1 00:06:27.920 --rc genhtml_function_coverage=1 00:06:27.920 --rc genhtml_legend=1 00:06:27.920 --rc geninfo_all_blocks=1 00:06:27.920 --rc geninfo_unexecuted_blocks=1 00:06:27.920 00:06:27.920 ' 00:06:27.920 15:53:56 version -- app/version.sh@17 -- # get_header_version major 00:06:27.920 15:53:56 version -- app/version.sh@14 -- # cut -f2 00:06:27.920 15:53:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:27.920 15:53:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.920 15:53:56 version -- app/version.sh@17 -- # major=24 00:06:27.920 15:53:56 version -- app/version.sh@18 -- # get_header_version minor 00:06:27.920 15:53:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:27.920 15:53:56 version -- app/version.sh@14 -- # cut -f2 00:06:27.920 15:53:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.920 15:53:56 version -- app/version.sh@18 -- # minor=9 00:06:27.920 15:53:56 version -- app/version.sh@19 -- # get_header_version patch 00:06:27.920 15:53:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:27.920 15:53:56 version -- app/version.sh@14 -- # cut -f2 00:06:27.920 15:53:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.920 15:53:56 version -- app/version.sh@19 -- # patch=1 00:06:27.920 15:53:56 version -- app/version.sh@20 -- # get_header_version suffix 00:06:27.920 15:53:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:27.920 15:53:56 version -- app/version.sh@14 -- # cut -f2 00:06:27.920 15:53:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.920 15:53:56 version -- app/version.sh@20 -- # suffix=-pre 00:06:27.920 15:53:56 version -- app/version.sh@22 -- # version=24.9 00:06:27.920 15:53:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:27.920 15:53:56 version -- app/version.sh@25 -- # version=24.9.1 00:06:27.920 15:53:56 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:27.920 15:53:56 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:27.920 15:53:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:27.920 15:53:56 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:27.920 15:53:56 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:27.920 00:06:27.920 real 0m0.275s 00:06:27.920 user 0m0.156s 00:06:27.920 sys 0m0.170s 00:06:27.920 15:53:56 version -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.920 15:53:56 version -- common/autotest_common.sh@10 -- # set +x 00:06:27.920 ************************************ 00:06:27.920 END TEST version 00:06:27.920 ************************************ 00:06:27.920 15:53:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:27.920 15:53:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:27.920 15:53:56 -- spdk/autotest.sh@194 -- # uname -s 00:06:27.920 15:53:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:27.920 15:53:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:27.920 15:53:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:27.920 15:53:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:27.920 15:53:56 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:27.920 15:53:56 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:27.920 15:53:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.920 15:53:56 -- common/autotest_common.sh@10 -- # set +x 00:06:28.198 15:53:56 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:28.198 15:53:56 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:28.198 15:53:56 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:28.198 15:53:56 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:28.198 15:53:56 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']' 00:06:28.198 15:53:56 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:28.198 15:53:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.198 15:53:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.198 15:53:56 -- common/autotest_common.sh@10 -- # set +x 00:06:28.198 ************************************ 00:06:28.198 START TEST nvmf_rdma 00:06:28.198 ************************************ 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:28.198 * Looking for test storage... 
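version.sh derives the version from include/spdk/version.h rather than from git metadata: each component comes from a grep over the matching #define (major=24, minor=9, patch=1, suffix=-pre), assembled into 24.9.1, with the -pre suffix apparently mapped to rc0 so it can be compared against the PEP 440 string the SPDK Python package reports (24.9.1rc0). One field extracted by hand, using the same pipeline the test traces:

    $ grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
    24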
00:06:28.198 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.198 15:53:56 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.198 --rc genhtml_branch_coverage=1 00:06:28.198 --rc genhtml_function_coverage=1 00:06:28.198 --rc genhtml_legend=1 00:06:28.198 --rc geninfo_all_blocks=1 00:06:28.198 --rc geninfo_unexecuted_blocks=1 00:06:28.198 00:06:28.198 ' 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.198 --rc genhtml_branch_coverage=1 00:06:28.198 --rc genhtml_function_coverage=1 00:06:28.198 --rc genhtml_legend=1 00:06:28.198 --rc geninfo_all_blocks=1 00:06:28.198 --rc geninfo_unexecuted_blocks=1 00:06:28.198 00:06:28.198 ' 00:06:28.198 15:53:56 nvmf_rdma -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.198 --rc genhtml_branch_coverage=1 00:06:28.198 --rc genhtml_function_coverage=1 00:06:28.198 --rc genhtml_legend=1 00:06:28.198 --rc geninfo_all_blocks=1 00:06:28.198 --rc geninfo_unexecuted_blocks=1 00:06:28.198 00:06:28.198 ' 00:06:28.198 15:53:56 nvmf_rdma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.198 --rc genhtml_branch_coverage=1 00:06:28.198 --rc genhtml_function_coverage=1 00:06:28.198 --rc genhtml_legend=1 00:06:28.198 --rc geninfo_all_blocks=1 00:06:28.198 --rc geninfo_unexecuted_blocks=1 00:06:28.199 00:06:28.199 ' 00:06:28.199 15:53:56 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:28.199 15:53:56 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:28.199 15:53:56 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:28.199 15:53:56 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.199 15:53:56 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.199 15:53:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:28.199 ************************************ 00:06:28.199 START TEST nvmf_target_core 00:06:28.199 ************************************ 00:06:28.199 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:28.459 * Looking for test storage... 00:06:28.459 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.459 --rc genhtml_branch_coverage=1 00:06:28.459 --rc genhtml_function_coverage=1 00:06:28.459 --rc genhtml_legend=1 00:06:28.459 --rc geninfo_all_blocks=1 00:06:28.459 --rc geninfo_unexecuted_blocks=1 00:06:28.459 00:06:28.459 ' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.459 --rc genhtml_branch_coverage=1 00:06:28.459 --rc genhtml_function_coverage=1 00:06:28.459 --rc genhtml_legend=1 00:06:28.459 --rc geninfo_all_blocks=1 00:06:28.459 --rc geninfo_unexecuted_blocks=1 00:06:28.459 00:06:28.459 ' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.459 --rc genhtml_branch_coverage=1 00:06:28.459 --rc genhtml_function_coverage=1 00:06:28.459 --rc genhtml_legend=1 00:06:28.459 --rc geninfo_all_blocks=1 00:06:28.459 --rc geninfo_unexecuted_blocks=1 00:06:28.459 00:06:28.459 ' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.459 --rc genhtml_branch_coverage=1 00:06:28.459 --rc genhtml_function_coverage=1 00:06:28.459 --rc genhtml_legend=1 00:06:28.459 --rc geninfo_all_blocks=1 00:06:28.459 --rc geninfo_unexecuted_blocks=1 00:06:28.459 00:06:28.459 ' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.459 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.460 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.460 15:53:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:28.460 
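The recurring complaint "common.sh: line 33: [: : integer expression expected" comes from nvmf/common.sh evaluating '[' '' -eq 1 ']': the variable tested on line 33 is empty in this environment, and -eq requires integer operands, so test prints the error and returns a non-zero status. The suites tolerate this (the guarded branch is simply skipped), but the usual cure is an integer default on the operand, e.g. (illustrative only; the actual variable name is not visible in the trace):

    [ "${SOME_FLAG:-0}" -eq 1 ]    # hypothetical: empty/unset collapses to 0 instead of erroring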
************************************ 00:06:28.460 START TEST nvmf_abort 00:06:28.460 ************************************ 00:06:28.460 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:28.720 * Looking for test storage... 00:06:28.720 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:28.720 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.721 --rc genhtml_branch_coverage=1 00:06:28.721 --rc genhtml_function_coverage=1 00:06:28.721 --rc genhtml_legend=1 00:06:28.721 --rc geninfo_all_blocks=1 00:06:28.721 --rc geninfo_unexecuted_blocks=1 00:06:28.721 00:06:28.721 ' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.721 --rc genhtml_branch_coverage=1 00:06:28.721 --rc genhtml_function_coverage=1 00:06:28.721 --rc genhtml_legend=1 00:06:28.721 --rc geninfo_all_blocks=1 00:06:28.721 --rc geninfo_unexecuted_blocks=1 00:06:28.721 00:06:28.721 ' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.721 --rc genhtml_branch_coverage=1 00:06:28.721 --rc genhtml_function_coverage=1 00:06:28.721 --rc genhtml_legend=1 00:06:28.721 --rc geninfo_all_blocks=1 00:06:28.721 --rc geninfo_unexecuted_blocks=1 00:06:28.721 00:06:28.721 ' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.721 --rc genhtml_branch_coverage=1 00:06:28.721 --rc genhtml_function_coverage=1 00:06:28.721 --rc genhtml_legend=1 00:06:28.721 --rc geninfo_all_blocks=1 00:06:28.721 --rc geninfo_unexecuted_blocks=1 00:06:28.721 00:06:28.721 ' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.721 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:28.721 15:53:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:35.294 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:35.294 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:35.294 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 
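[editor's note] The scan above matches PCI devices against a prebuilt vendor:device cache (0x15b3:0x1015 here is a Mellanox ConnectX-4 Lx part). A minimal standalone sketch of the same classification idea via sysfs — the sysfs walk and the device labels are assumptions for illustration, not the harness's actual pci_bus_cache:

```bash
#!/usr/bin/env bash
# Sketch: classify RDMA-capable NICs by PCI vendor:device ID, mirroring the
# mlx/e810/x722 matching visible in the trace. Assumed sysfs layout; the
# real nvmf/common.sh builds and consults its own pci_bus_cache instead.
mellanox=0x15b3 intel=0x8086
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    case "$vendor:$device" in
        "$mellanox:0x1015") echo "Found ${dev##*/} ($vendor - $device)" ;;  # ConnectX-4 Lx
        "$intel:0x159b")    echo "Found ${dev##*/} ($vendor - $device)" ;;  # E810 family
    esac
done
```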
00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:35.295 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:35.295 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # rdma_device_init 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@526 -- # allocate_nic_ips 00:06:35.295 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:35.555 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:35.555 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:35.555 altname enp217s0f0np0 
00:06:35.555 altname ens818f0np0 00:06:35.555 inet 192.168.100.8/24 scope global mlx_0_0 00:06:35.555 valid_lft forever preferred_lft forever 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:35.555 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:35.556 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:35.556 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:35.556 altname enp217s0f1np1 00:06:35.556 altname ens818f1np1 00:06:35.556 inet 192.168.100.9/24 scope global mlx_0_1 00:06:35.556 valid_lft forever preferred_lft forever 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:35.556 15:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:06:35.556 192.168.100.9' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:06:35.556 192.168.100.9' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # head -n 1 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:06:35.556 192.168.100.9' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # tail -n +2 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # head -n 1 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 
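[editor's note] The address harvesting traced at nvmf/common.sh@116-117 reduces to three commands: field 4 of `ip -o -4 addr show <if>` is the CIDR address (e.g. 192.168.100.8/24), and cutting at '/' leaves the bare IP. Reconstructed from the traced commands as a self-contained helper; the interface name is just this rig's:

```bash
# Reconstructed from the trace above (nvmf/common.sh@116-117): print the
# first IPv4 address assigned to an interface.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 on this test rig
```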
00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=2645562 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 2645562 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2645562 ']' 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.556 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.556 [2024-12-15 15:54:04.093065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:35.556 [2024-12-15 15:54:04.093123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.816 [2024-12-15 15:54:04.166288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.816 [2024-12-15 15:54:04.206998] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.816 [2024-12-15 15:54:04.207039] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.816 [2024-12-15 15:54:04.207049] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.816 [2024-12-15 15:54:04.207058] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.816 [2024-12-15 15:54:04.207065] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
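[editor's note] waitforlisten above blocks until the freshly started nvmf_tgt (pid 2645562) is alive and its RPC socket /var/tmp/spdk.sock appears, with max_retries=100. A hedged sketch of that gate — the real loop lives in autotest_common.sh and additionally retries RPC calls, which this omits:

```bash
# Sketch of the waitforlisten gate seen above: wait for the target process
# to stay alive and its UNIX-domain RPC socket node to appear, giving up
# after max_retries polls. Simplified relative to autotest_common.sh.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $rpc_addr ]] && return 0           # socket node exists
        sleep 0.1
    done
    return 1                                      # timed out
}
```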
00:06:35.816 [2024-12-15 15:54:04.207177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.816 [2024-12-15 15:54:04.207282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.816 [2024-12-15 15:54:04.207284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.816 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.076 [2024-12-15 15:54:04.399707] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11b85c0/0x11bcab0) succeed. 00:06:36.076 [2024-12-15 15:54:04.419840] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11b9b60/0x11fe150) succeed. 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.076 Malloc0 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.076 Delay0 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.076 [2024-12-15 15:54:04.579938] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.076 15:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:36.335 [2024-12-15 15:54:04.686421] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:38.243 Initializing NVMe Controllers 00:06:38.243 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:06:38.243 controller IO queue size 128 less than required 00:06:38.243 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:38.243 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:38.243 Initialization complete. Launching workers. 
00:06:38.243 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42889 00:06:38.243 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42950, failed to submit 62 00:06:38.243 success 42890, unsuccessful 60, failed 0 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:38.243 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:38.243 rmmod nvme_rdma 00:06:38.502 rmmod nvme_fabrics 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 2645562 ']' 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 2645562 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2645562 ']' 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2645562 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2645562 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2645562' 00:06:38.503 killing process with pid 2645562 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2645562 00:06:38.503 15:54:06 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2645562 00:06:38.762 15:54:07 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:38.762 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:06:38.762 00:06:38.762 real 0m10.166s 00:06:38.762 user 0m12.894s 00:06:38.762 sys 0m5.645s 00:06:38.762 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.762 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.762 ************************************ 00:06:38.762 END TEST nvmf_abort 00:06:38.762 ************************************ 00:06:38.762 15:54:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:38.762 15:54:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.762 15:54:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.762 15:54:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.762 ************************************ 00:06:38.762 START TEST nvmf_ns_hotplug_stress 00:06:38.762 ************************************ 00:06:38.762 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:38.762 * Looking for test storage... 00:06:39.022 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.022 --rc genhtml_branch_coverage=1 00:06:39.022 --rc genhtml_function_coverage=1 00:06:39.022 --rc genhtml_legend=1 00:06:39.022 --rc geninfo_all_blocks=1 00:06:39.022 --rc geninfo_unexecuted_blocks=1 00:06:39.022 00:06:39.022 ' 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.022 --rc genhtml_branch_coverage=1 00:06:39.022 --rc genhtml_function_coverage=1 00:06:39.022 --rc genhtml_legend=1 00:06:39.022 --rc geninfo_all_blocks=1 00:06:39.022 --rc geninfo_unexecuted_blocks=1 00:06:39.022 00:06:39.022 ' 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.022 --rc genhtml_branch_coverage=1 00:06:39.022 --rc genhtml_function_coverage=1 00:06:39.022 --rc genhtml_legend=1 00:06:39.022 --rc geninfo_all_blocks=1 00:06:39.022 --rc geninfo_unexecuted_blocks=1 00:06:39.022 00:06:39.022 ' 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:39.022 --rc genhtml_branch_coverage=1 00:06:39.022 --rc genhtml_function_coverage=1 00:06:39.022 --rc genhtml_legend=1 00:06:39.022 --rc geninfo_all_blocks=1 00:06:39.022 --rc geninfo_unexecuted_blocks=1 00:06:39.022 00:06:39.022 ' 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.022 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.023 15:54:07 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:39.023 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:39.023 15:54:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:45.593 15:54:13 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:45.593 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:06:45.593 15:54:13 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:45.593 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:45.593 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:45.594 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 
0000:d9:00.1: mlx_0_1' 00:06:45.594 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # rdma_device_init 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@526 -- # allocate_nic_ips 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@108 -- # echo mlx_0_0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:45.594 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:45.594 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:45.594 altname enp217s0f0np0 00:06:45.594 altname ens818f0np0 00:06:45.594 inet 192.168.100.8/24 scope global mlx_0_0 00:06:45.594 valid_lft forever preferred_lft forever 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:45.594 7: mlx_0_1: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:06:45.594 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:45.594 altname enp217s0f1np1 00:06:45.594 altname ens818f1np1 00:06:45.594 inet 192.168.100.9/24 scope global mlx_0_1 00:06:45.594 valid_lft forever preferred_lft forever 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 
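get_ip_address, traced twice above, is just a three-stage pipeline over iproute2's one-record-per-line output; a self-contained version of the same idea:

get_ip_address() {
    local interface=$1
    # -o prints one record per line; field 4 is the CIDR address, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig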
00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:45.594 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:06:45.595 192.168.100.9' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # head -n 1 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:06:45.595 192.168.100.9' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:06:45.595 192.168.100.9' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # tail -n +2 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # head -n 1 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=2649797 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 2649797 00:06:45.595 15:54:13 
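The two-address RDMA_IP_LIST is then split into first and second target IPs with head/tail exactly as traced; in isolation:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)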
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2649797 ']' 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.595 15:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.595 [2024-12-15 15:54:13.949732] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:45.595 [2024-12-15 15:54:13.949789] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.595 [2024-12-15 15:54:14.019775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.595 [2024-12-15 15:54:14.058234] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.595 [2024-12-15 15:54:14.058275] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.595 [2024-12-15 15:54:14.058285] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.595 [2024-12-15 15:54:14.058293] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.595 [2024-12-15 15:54:14.058300] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
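waitforlisten, entered above with nvmfpid=2649797, blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A sketch of an equivalent poll; only max_retries=100 is visible in the trace, while the rpc_get_methods probe and the 0.5 s interval are assumptions:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    # a successful RPC round-trip means the target is up and listening
    if "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done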
00:06:45.595 [2024-12-15 15:54:14.058343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.595 [2024-12-15 15:54:14.058430] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.595 [2024-12-15 15:54:14.058432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.595 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.595 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:45.595 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:45.595 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.595 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.854 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.854 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:45.854 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:45.854 [2024-12-15 15:54:14.394146] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11f85c0/0x11fcab0) succeed. 00:06:45.854 [2024-12-15 15:54:14.405156] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11f9b60/0x123e150) succeed. 00:06:46.114 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:46.373 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:46.373 [2024-12-15 15:54:14.881246] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:46.373 15:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:46.634 15:54:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:46.894 Malloc0 00:06:46.894 15:54:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:47.153 Delay0 00:06:47.153 15:54:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.153 15:54:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:06:47.412 NULL1 00:06:47.412 15:54:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:47.671 15:54:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:47.671 15:54:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2650102 00:06:47.671 15:54:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:47.671 15:54:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.049 Read completed with error (sct=0, sc=11) 00:06:49.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.049 15:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.049 15:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:49.049 15:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:49.308 true 00:06:49.308 15:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:49.308 15:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.245 15:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.246 15:54:18 
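The provisioning that the perf run exercises was traced at ns_hotplug_stress.sh@27-@36 above; collapsed into one sequence (commands verbatim from the trace, comments are editorial reading of the flags):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc_py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # any host, max 10 namespaces
$rpc_py nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc_py bdev_malloc_create 32 512 -b Malloc0                          # 32 MB RAM bdev, 512 B blocks
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1,000,000 us avg/p99 read+write latency
$rpc_py nvmf_subsystem_add_ns "$nqn" Delay0
$rpc_py bdev_null_create NULL1 1000 512                               # 1000 MB null bdev, 512 B blocks
$rpc_py nvmf_subsystem_add_ns "$nqn" NULL1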
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:50.246 15:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:50.504 true 00:06:50.504 15:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:50.504 15:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.442 15:54:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.442 15:54:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:51.442 15:54:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:51.701 true 00:06:51.701 15:54:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:51.701 15:54:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.637 15:54:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.637 15:54:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:52.637 15:54:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:52.898 true 00:06:52.898 15:54:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2650102 00:06:52.898 15:54:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.835 15:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.835 15:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:53.835 15:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:54.094 true 00:06:54.094 15:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:54.094 15:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.032 15:54:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.032 15:54:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:55.032 15:54:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:55.291 true 00:06:55.291 15:54:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:55.291 15:54:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.550 15:54:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.550 15:54:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:55.550 15:54:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:55.809 true 00:06:55.809 15:54:24 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:55.809 15:54:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.185 15:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.185 15:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:57.185 15:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:57.185 true 00:06:57.185 15:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:57.185 15:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.122 15:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.381 15:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:58.381 15:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:58.381 true 00:06:58.639 15:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:58.639 15:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.207 15:54:27 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.466 15:54:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:59.466 15:54:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:59.725 true 00:06:59.725 15:54:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:06:59.725 15:54:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.663 15:54:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.663 15:54:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:00.663 15:54:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:00.925 true 00:07:00.925 15:54:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:00.925 15:54:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.922 15:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.922 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:01.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.922 15:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:01.922 15:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:02.181 true 00:07:02.181 15:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:02.181 15:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.118 15:54:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.118 15:54:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:03.118 15:54:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:03.377 true 00:07:03.377 15:54:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:03.377 15:54:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.636 15:54:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.636 15:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:03.636 15:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:03.895 true 00:07:03.895 15:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:03.895 15:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.274 15:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:07:05.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.274 15:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:05.274 15:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:05.274 true 00:07:05.274 15:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:05.274 15:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.212 15:54:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.471 15:54:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:06.471 15:54:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:06.729 true 00:07:06.729 15:54:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:06.729 15:54:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.556 15:54:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.556 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.556 15:54:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:07.556 15:54:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:07.815 true 00:07:07.815 15:54:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:07.815 15:54:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.752 15:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.752 15:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:08.752 15:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:09.011 true 00:07:09.011 15:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:09.011 15:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.947 15:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.947 15:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:09.947 15:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:07:10.304 true 00:07:10.304 15:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:10.304 15:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.159 15:54:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.159 15:54:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:11.159 15:54:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:11.418 true 00:07:11.418 15:54:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:11.418 15:54:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.677 15:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.937 15:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:11.937 15:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:11.937 true 00:07:11.937 15:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:11.937 15:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.316 15:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.316 15:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:13.316 15:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:13.575 true 00:07:13.575 15:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:13.575 15:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.511 15:54:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.511 15:54:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:14.511 15:54:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:14.770 true 00:07:14.770 15:54:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:14.770 15:54:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.706 15:54:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.706 15:54:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:15.706 15:54:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:15.965 true 00:07:15.965 15:54:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:15.965 15:54:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.902 15:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.902 15:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:16.902 15:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:17.161 true 00:07:17.161 15:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:17.161 15:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.097 15:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.097 15:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:18.097 15:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:18.357 true 00:07:18.357 15:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:18.357 15:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.617 15:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.617 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:18.617 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:18.876 true 00:07:18.876 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102 00:07:18.876 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.135 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:19.394 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:19.394 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:19.394 true
00:07:19.394 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102
00:07:19.394 15:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:19.654 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:19.913 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:19.913 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:19.913 Initializing NVMe Controllers
00:07:19.913 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:07:19.913 Controller IO queue size 128, less than required.
00:07:19.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:19.913 Controller IO queue size 128, less than required.
00:07:19.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:19.913 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:19.913 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:19.913 Initialization complete. Launching workers.
00:07:19.913 ========================================================
00:07:19.913                                                            Latency(us)
00:07:19.913 Device Information                                                           :     IOPS      MiB/s    Average        min        max
00:07:19.913 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  5225.87       2.55   22153.23     821.43 1007091.53
00:07:19.913 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35828.20      17.49    3572.38    1673.18  286666.27
00:07:19.913 ========================================================
00:07:19.913 Total                                                                        : 41054.07      20.05    5937.58     821.43 1007091.53
00:07:20.172 true
00:07:20.172 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2650102
00:07:20.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2650102) - No such process
00:07:20.173 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2650102
00:07:20.173 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:20.173 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:20.432 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:20.432 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:20.432 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:20.432 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:20.432 15:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:20.691 null0
00:07:20.691 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:20.691 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:20.691 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:20.950 null1
00:07:20.950 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:20.950 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:20.950 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:20.950 null2
00:07:21.209 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:21.209 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:21.209 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
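The latency summary above is printed by the background I/O generator (PID 2650102) whose liveness the resize loop polls at @44, and the waves of suppressed "Read completed with error (sct=0, sc=11)" messages earlier in the run are the expected fallout of I/O racing the namespace hot-remove: sct=0, sc=0x0b is the NVMe generic status "Invalid Namespace or Format". A minimal bash sketch of what the @44-@50 and @53-@55 xtrace entries appear to execute follows; the wrapper structure, the starting null_size, and the perf_pid variable name are assumptions, not the verbatim ns_hotplug_stress.sh:

    # Sketch only, reconstructed from the xtrace markers above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    perf_pid=2650102   # assumed: PID of the background I/O workload
    null_size=1000     # assumed starting value; the log shows it reach 1029
    while kill -0 "$perf_pid"; do                                      # @44: loop while the workload lives
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                   # @49: grow the target size
        "$rpc" bdev_null_resize NULL1 "$null_size"                     # @50: resize the null bdev under I/O
    done
    wait "$perf_pid"                                                   # @53: reap the finished workload
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # @54: final namespace cleanup
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2       # @55

Because kill -0 only tests for process existence, the final iteration logs "kill: (2650102) - No such process" before the loop falls through to the wait at @53.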
00:07:21.209 null3 00:07:21.209 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.209 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.209 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:21.468 null4 00:07:21.468 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.468 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.468 15:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:21.727 null5 00:07:21.727 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.727 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.727 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:21.727 null6 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:21.987 null7 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
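Each add_remove worker launched in this stretch cycles a single namespace ID against a single null bdev. Judging from the @14, @16, @17 and @18 xtrace markers visible above, the helper plausibly looks like the sketch below; the 10-iteration bound is taken from the (( i < 10 )) checks, while the function wrapper itself is an assumption:

    # Sketch of the add_remove helper as the xtrace suggests it.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                                                          # @14
        for ((i = 0; i < 10; i++)); do                                                 # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17: attach
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18: detach
        done
    }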
00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:21.987 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
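The interleaved @58-@64 entries here are eight of those workers being fanned out concurrently, one namespace ID per null bdev, with the wait at @66 collecting them all. Untangled, the pattern is roughly as follows (loop syntax assumed):

    # Sketch of the fan-out visible at @58-@66.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nthreads=8; pids=()                               # @58
    for ((i = 0; i < nthreads; i++)); do              # @59
        "$rpc" bdev_null_create "null$i" 100 4096     # @60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do              # @62
        add_remove "$((i + 1))" "null$i" &            # @63: run each worker in the background
        pids+=($!)                                    # @64: remember its PID
    done
    wait "${pids[@]}"                                 # @66: block until all eight finish

The per-worker output interleaves from here on, which is why consecutive entries jump between different @16/@17 loop states and namespace IDs.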
00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2656230 2656232 2656235 2656238 2656242 2656245 2656248 2656251 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.988 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.247 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.247 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.247 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.247 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.247 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.247 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.247 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.248 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.507 15:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.767 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.767 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.767 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.767 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.767 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.767 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.767 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.767 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.027 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.286 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.287 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.287 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.287 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.287 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.287 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.287 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.287 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.546 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.546 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.546 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.546 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.546 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.546 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.546 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.546 15:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.805 15:54:52 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.805 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.806 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.065 15:54:52 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.065 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.325 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.325 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.325 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.325 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.325 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.325 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.325 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.325 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.585 15:54:52 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.585 15:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.844 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.844 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.844 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.844 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.844 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.844 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.844 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.845 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.104 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.104 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.104 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.104 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.104 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.104 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.104 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.104 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.364 15:54:53 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.364 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.624 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.624 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.624 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.624 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.624 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.624 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.624 15:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.624 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.883 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.884 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.884 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:26.143 rmmod nvme_rdma 00:07:26.143 rmmod nvme_fabrics 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 2649797 ']' 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 2649797 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2649797 ']' 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2649797 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.143 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2649797 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2649797' 00:07:26.403 killing process with pid 2649797 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2649797 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2649797 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:07:26.403 00:07:26.403 real 0m47.724s 00:07:26.403 user 3m19.044s 00:07:26.403 sys 0m13.882s 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:26.403 15:54:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.403 ************************************ 00:07:26.403 END TEST nvmf_ns_hotplug_stress 00:07:26.403 ************************************ 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.666 ************************************ 00:07:26.666 START TEST nvmf_delete_subsystem 00:07:26.666 ************************************ 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:26.666 * Looking for test storage... 00:07:26.666 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.666 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:26.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.667 --rc genhtml_branch_coverage=1 00:07:26.667 --rc genhtml_function_coverage=1 00:07:26.667 --rc genhtml_legend=1 00:07:26.667 --rc geninfo_all_blocks=1 00:07:26.667 --rc geninfo_unexecuted_blocks=1 00:07:26.667 00:07:26.667 ' 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:26.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.667 --rc genhtml_branch_coverage=1 00:07:26.667 --rc genhtml_function_coverage=1 00:07:26.667 --rc genhtml_legend=1 00:07:26.667 --rc geninfo_all_blocks=1 00:07:26.667 --rc geninfo_unexecuted_blocks=1 00:07:26.667 00:07:26.667 ' 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:26.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.667 --rc genhtml_branch_coverage=1 00:07:26.667 --rc genhtml_function_coverage=1 00:07:26.667 --rc genhtml_legend=1 00:07:26.667 --rc geninfo_all_blocks=1 00:07:26.667 --rc geninfo_unexecuted_blocks=1 00:07:26.667 00:07:26.667 ' 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:26.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.667 --rc genhtml_branch_coverage=1 00:07:26.667 --rc genhtml_function_coverage=1 00:07:26.667 --rc genhtml_legend=1 00:07:26.667 --rc geninfo_all_blocks=1 00:07:26.667 --rc geninfo_unexecuted_blocks=1 00:07:26.667 00:07:26.667 ' 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.667 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.927 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.927 15:54:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:33.500 15:55:01 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:33.500 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:33.500 15:55:01 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:33.500 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:33.500 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:33.501 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:33.501 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # rdma_device_init 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@526 -- # allocate_nic_ips 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:33.501 15:55:01 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:33.501 15:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:33.501 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:33.501 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:33.501 altname enp217s0f0np0 00:07:33.501 altname ens818f0np0 00:07:33.501 inet 192.168.100.8/24 scope global mlx_0_0 00:07:33.501 valid_lft forever preferred_lft forever 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:33.501 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:33.501 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:33.501 altname enp217s0f1np1 00:07:33.501 altname ens818f1np1 00:07:33.501 inet 192.168.100.9/24 scope global mlx_0_1 00:07:33.501 valid_lft forever preferred_lft forever 00:07:33.501 15:55:02 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:33.501 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:33.502 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:33.502 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:33.502 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # 
cut -d/ -f1 00:07:33.502 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:33.502 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:33.761 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:33.761 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:33.761 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:07:33.762 192.168.100.9' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:07:33.762 192.168.100.9' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # head -n 1 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:07:33.762 192.168.100.9' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # tail -n +2 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # head -n 1 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=2660454 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 2660454 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2660454 ']' 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.762 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.762 [2024-12-15 15:55:02.184202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:33.762 [2024-12-15 15:55:02.184254] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.762 [2024-12-15 15:55:02.254526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.762 [2024-12-15 15:55:02.292920] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.762 [2024-12-15 15:55:02.292959] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.762 [2024-12-15 15:55:02.292968] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.762 [2024-12-15 15:55:02.292977] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.762 [2024-12-15 15:55:02.292999] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.762 [2024-12-15 15:55:02.293047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.762 [2024-12-15 15:55:02.293049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.022 [2024-12-15 15:55:02.457117] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x157e720/0x1582c10) succeed. 
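With the target process up, the harness configures it over JSON-RPC. The rpc_cmd calls traced above and below correspond to plain scripts/rpc.py invocations, so this run's bring-up can be reproduced by hand roughly as follows (a sketch using exactly the arguments the trace shows; rpc_cmd is the harness wrapper around the same RPC methods):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  build/bin/nvmf_tgt -m 0x3 &    # two reactors on cores 0-1, matching the reactor notices above
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s artificial latencies
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0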
00:07:34.022 [2024-12-15 15:55:02.467059] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x157fc20/0x15c42b0) succeed. 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.022 [2024-12-15 15:55:02.550620] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.022 NULL1 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.022 Delay0 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.022 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.281 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.281 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2660511 00:07:34.281 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 
128 -w randrw -M 70 -o 512 -P 4 00:07:34.281 15:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:34.281 [2024-12-15 15:55:02.664753] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:36.188 15:55:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.188 15:55:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.188 15:55:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.567 NVMe io qpair process completion error 00:07:37.567 NVMe io qpair process completion error 00:07:37.567 NVMe io qpair process completion error 00:07:37.567 NVMe io qpair process completion error 00:07:37.567 NVMe io qpair process completion error 00:07:37.567 NVMe io qpair process completion error 00:07:37.567 15:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.567 15:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:37.567 15:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2660511 00:07:37.567 15:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:37.826 15:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:37.826 15:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2660511 00:07:37.826 15:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:38.395 Read completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Read completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Write completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Write completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Read completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Write completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Write completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Read completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Read completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Read completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Write completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Write completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Write completed with error (sct=0, sc=8) 00:07:38.395 starting I/O failed: -6 00:07:38.395 Read completed with error (sct=0, sc=8) 00:07:38.396 starting I/O failed: -6 00:07:38.396 Write completed with error (sct=0, sc=8) 00:07:38.396 starting I/O failed: -6 00:07:38.396 Write completed with error (sct=0, sc=8) 00:07:38.396 starting I/O failed: -6 00:07:38.396 Read completed with error (sct=0, sc=8) 00:07:38.396 
starting I/O failed: -6 00:07:38.396 [several hundred further "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6" submission failures, elided]
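Every completion in this stream carries the same status pair: per the NVMe base specification, sct=0 is the generic command status type and sc=8 (0x08) is Command Aborted due to SQ Deletion. That is the expected signature of this test: nvmf_delete_subsystem tears the queue pairs down while spdk_nvme_perf still has a 128-deep queue in flight, so everything queued is aborted rather than completed. A spot-check of the constant against the SPDK headers (a hypothetical one-liner, assuming the checkout this job uses):

  grep -n ABORTED_SQ_DELETION /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/nvme_spec.h
  # expected hit: SPDK_NVME_SC_ABORTED_SQ_DELETION = 0x8 in the generic command status enum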
00:07:38.397 [the last couple dozen in-flight completions, same "Read/Write completed with error (sct=0, sc=8)" pattern, elided]
00:07:38.397 Initializing NVMe Controllers
00:07:38.397 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:07:38.397 Controller IO queue size 128, less than required.
00:07:38.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:38.397 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:38.397 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:38.397 Initialization complete. Launching workers.
00:07:38.397 ========================================================
00:07:38.397                                                                             Latency(us)
00:07:38.397 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:07:38.397 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:    80.48     0.04 1593705.00 1000211.67 2975659.21
00:07:38.397 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:    80.48     0.04 1595246.09 1001259.29 2977355.28
00:07:38.397 ========================================================
00:07:38.397 Total                                                                    :   160.97     0.08 1594475.54 1000211.67 2977355.28
00:07:38.397
00:07:38.397 [2024-12-15 15:55:06.750694] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:07:38.397 [2024-12-15 15:55:06.765041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-12-15 15:55:06.765062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
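The kill -0 / sleep 0.5 cadence in the surrounding trace is delete_subsystem.sh waiting for perf to notice the deletion and exit on its own. Reconstructed from the @26-@38 (first run, pid 2660511) and @52-@60 (second run, pid 2661347) line references, the flow is roughly the following sketch; only the commands and bounds visible in the trace are taken as given, the failure handling is a guess:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                              # let the queue-depth-128 randrw load ramp up
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  delay=0
  while kill -0 $perf_pid; do          # perf still alive?
      sleep 0.5
      ((delay++ > 30)) && return 1     # ~15 s bound; the second run uses > 20 with -t 3
  done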
00:07:38.397 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:38.397 15:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:38.397 15:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2660511 00:07:38.397 15:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:38.966 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:38.966 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2660511 00:07:38.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2660511) - No such process 00:07:38.966 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2660511 00:07:38.966 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:38.966 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2660511 00:07:38.966 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:38.966 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.966 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2660511 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.967 [2024-12-15 15:55:07.290625] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:38.967 15:55:07 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2661347 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:38.967 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.967 [2024-12-15 15:55:07.379934] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:39.535 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.535 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:39.535 15:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.794 15:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.794 15:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:39.795 15:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.361 15:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.361 15:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:40.361 15:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.928 15:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.928 15:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:40.929 15:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.496 15:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.496 15:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:41.496 15:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.064 15:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.064 15:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:42.064 15:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.323 15:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.323 15:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:42.323 15:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.889 15:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.889 15:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:42.889 15:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.456 15:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.456 15:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:43.456 15:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.023 15:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.023 15:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:44.023 15:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.590 15:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.591 15:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:44.591 15:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.849 15:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.850 15:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:44.850 15:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.417 15:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.418 15:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:45.418 15:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.984 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.984 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:45.984 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.984 Initializing NVMe Controllers 00:07:45.984 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:45.984 
Controller IO queue size 128, less than required. 00:07:45.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:45.984 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:45.984 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:45.984 Initialization complete. Launching workers. 00:07:45.984 ======================================================== 00:07:45.984 Latency(us) 00:07:45.984 Device Information : IOPS MiB/s Average min max 00:07:45.984 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001402.52 1000055.25 1004306.42 00:07:45.984 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002965.38 1000746.81 1005690.57 00:07:45.984 ======================================================== 00:07:45.984 Total : 256.00 0.12 1002183.95 1000055.25 1005690.57 00:07:45.984 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2661347 00:07:46.551 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2661347) - No such process 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2661347 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:46.551 rmmod nvme_rdma 00:07:46.551 rmmod nvme_fabrics 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 2660454 ']' 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 2660454 00:07:46.551 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2660454 ']' 00:07:46.552 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2660454 00:07:46.552 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@955 -- # uname 00:07:46.552 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.552 15:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2660454 00:07:46.552 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.552 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.552 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2660454' 00:07:46.552 killing process with pid 2660454 00:07:46.552 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2660454 00:07:46.552 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2660454 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:07:46.811 00:07:46.811 real 0m20.203s 00:07:46.811 user 0m49.055s 00:07:46.811 sys 0m6.437s 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.811 ************************************ 00:07:46.811 END TEST nvmf_delete_subsystem 00:07:46.811 ************************************ 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.811 ************************************ 00:07:46.811 START TEST nvmf_host_management 00:07:46.811 ************************************ 00:07:46.811 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:47.071 * Looking for test storage... 
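The trace just below, once the test storage is located, walks scripts/common.sh's version comparison (lt 1.15 2, i.e. is the detected lcov older than 2.x) to pick the lcov option spelling. Reconstructed from the scripts/common.sh@333-@368 references in that trace, the helper is roughly this condensed sketch (the in-tree version has extra normalization, e.g. its decimal helper):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local ver1 ver1_l ver2 ver2_l op=$2 v lt=0 gt=0 eq=0
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      case "$op" in "<") lt=1 ;; ">") gt=1 ;; "==") eq=1 ;; esac
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return $((!gt))   # first differing component decides
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return $((!lt))
      done
      return $((!eq))
  }
  # lt 1.15 2 succeeds (1 < 2 on the first component), which selects the
  # --rc lcov_branch_coverage=1 option spelling seen in the trace below.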
00:07:47.071 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.071 --rc genhtml_branch_coverage=1 00:07:47.071 --rc genhtml_function_coverage=1 00:07:47.071 --rc genhtml_legend=1 00:07:47.071 --rc geninfo_all_blocks=1 00:07:47.071 --rc geninfo_unexecuted_blocks=1 00:07:47.071 00:07:47.071 ' 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:47.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.071 --rc genhtml_branch_coverage=1 00:07:47.071 --rc genhtml_function_coverage=1 00:07:47.071 --rc genhtml_legend=1 00:07:47.071 --rc geninfo_all_blocks=1 00:07:47.071 --rc geninfo_unexecuted_blocks=1 00:07:47.071 00:07:47.071 ' 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:47.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.071 --rc genhtml_branch_coverage=1 00:07:47.071 --rc genhtml_function_coverage=1 00:07:47.071 --rc genhtml_legend=1 00:07:47.071 --rc geninfo_all_blocks=1 00:07:47.071 --rc geninfo_unexecuted_blocks=1 00:07:47.071 00:07:47.071 ' 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.071 --rc genhtml_branch_coverage=1 00:07:47.071 --rc genhtml_function_coverage=1 00:07:47.071 --rc genhtml_legend=1 00:07:47.071 --rc geninfo_all_blocks=1 00:07:47.071 --rc geninfo_unexecuted_blocks=1 00:07:47.071 00:07:47.071 ' 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.071 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go triplet repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh@3-@6: two further PATH prepends of the same toolchain directories, the export, and an echo of the result, elided] 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.072 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.072 15:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.199 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.199 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:55.199 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:55.199 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:55.199 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:55.199 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:55.199 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:55.199 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:55.200 15:55:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:55.200 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:55.200 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:55.200 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:55.200 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:55.200 15:55:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # rdma_device_init 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@526 -- # allocate_nic_ips 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:55.200 
15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:55.200 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:55.201 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:55.201 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:55.201 altname enp217s0f0np0 00:07:55.201 altname ens818f0np0 00:07:55.201 inet 192.168.100.8/24 scope global mlx_0_0 00:07:55.201 valid_lft forever preferred_lft forever 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:55.201 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:55.201 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:55.201 altname enp217s0f1np1 00:07:55.201 altname ens818f1np1 00:07:55.201 inet 192.168.100.9/24 
scope global mlx_0_1 00:07:55.201 valid_lft forever preferred_lft forever 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:55.201 15:55:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:07:55.201 192.168.100.9' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:07:55.201 192.168.100.9' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # head -n 1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:07:55.201 192.168.100.9' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # tail -n +2 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # head -n 1 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=2666167 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 2666167 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2666167 ']' 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.201 15:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.201 [2024-12-15 15:55:22.851615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:55.201 [2024-12-15 15:55:22.851666] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.201 [2024-12-15 15:55:22.921405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.201 [2024-12-15 15:55:22.960355] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.201 [2024-12-15 15:55:22.960398] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.201 [2024-12-15 15:55:22.960407] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.201 [2024-12-15 15:55:22.960415] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.201 [2024-12-15 15:55:22.960439] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
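The `waitforlisten 2666167` step above blocks until the freshly forked `nvmf_tgt` answers on its UNIX-domain RPC socket. Reduced to its essentials, the polling idiom looks roughly like this (a condensed sketch, not the verbatim autotest helper; the retry budget and sleep interval here are illustrative):

    # Poll until the target app is alive and its RPC socket accepts requests.
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 100; i != 0; i--)); do
        # framework_wait_init only returns once SPDK subsystem init is done
        if [[ -S $rpc_sock ]] && scripts/rpc.py -s "$rpc_sock" framework_wait_init &> /dev/null; then
            break
        fi
        sleep 0.5
    done
    ((i > 0)) || { echo "nvmf_tgt never came up on $rpc_sock" >&2; exit 1; }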
00:07:55.201 [2024-12-15 15:55:22.960542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.201 [2024-12-15 15:55:22.960617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.201 [2024-12-15 15:55:22.960719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.201 [2024-12-15 15:55:22.960720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.201 [2024-12-15 15:55:23.144764] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dfa140/0x1dfe630) succeed. 00:07:55.201 [2024-12-15 15:55:23.156527] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dfb780/0x1e3fcd0) succeed. 
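With both mlx5 IB devices registered, `nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192` instantiates the RDMA transport; the Malloc0 bdev, the cnode0 subsystem and the 4420 listener that appear next come from the rpcs.txt batch the test cats into rpc_cmd. As a standalone scripts/rpc.py sequence that would look roughly like this (a sketch assembled from the values in this log, not the literal rpcs.txt contents):

    # Stand up the RDMA target this test exercises: transport, backing bdev,
    # subsystem, namespace, listener (addresses/sizes taken from the trace).
    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420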
00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.201 Malloc0 00:07:55.201 [2024-12-15 15:55:23.339092] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2666367 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2666367 /var/tmp/bdevperf.sock 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2666367 ']' 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
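`/dev/fd/63` in the bdevperf command line above is bash process substitution at work: `gen_nvmf_target_json 0` (traced next) writes the bdev config into an anonymous pipe, so no config file ever lands on disk. Recreated as a standalone invocation it would look roughly like this (a sketch; paths abbreviated):

    # bdevperf pulls its bdev config from the <( ... ) pipe; -q/-o/-w/-t
    # mirror the queue depth, IO size, workload and runtime used above.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10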
00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:55.201 { 00:07:55.201 "params": { 00:07:55.201 "name": "Nvme$subsystem", 00:07:55.201 "trtype": "$TEST_TRANSPORT", 00:07:55.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.201 "adrfam": "ipv4", 00:07:55.201 "trsvcid": "$NVMF_PORT", 00:07:55.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.201 "hdgst": ${hdgst:-false}, 00:07:55.201 "ddgst": ${ddgst:-false} 00:07:55.201 }, 00:07:55.201 "method": "bdev_nvme_attach_controller" 00:07:55.201 } 00:07:55.201 EOF 00:07:55.201 )") 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:55.201 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:55.201 "params": { 00:07:55.201 "name": "Nvme0", 00:07:55.201 "trtype": "rdma", 00:07:55.201 "traddr": "192.168.100.8", 00:07:55.201 "adrfam": "ipv4", 00:07:55.201 "trsvcid": "4420", 00:07:55.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:55.201 "hdgst": false, 00:07:55.201 "ddgst": false 00:07:55.201 }, 00:07:55.201 "method": "bdev_nvme_attach_controller" 00:07:55.201 }' 00:07:55.201 [2024-12-15 15:55:23.442187] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:55.202 [2024-12-15 15:55:23.442238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666367 ] 00:07:55.202 [2024-12-15 15:55:23.515205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.202 [2024-12-15 15:55:23.553748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.202 Running I/O for 10 seconds... 
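The `config+=`/heredoc/`jq`/`printf` trace above is all there is to `gen_nvmf_target_json`: expand one `bdev_nvme_attach_controller` stanza per subsystem id, join them with `IFS=,`, and let `jq .` validate and pretty-print the result, which is exactly the JSON document echoed at the end. Boiled down to a self-contained sketch (the real helper in nvmf/common.sh carries extra plumbing; this keeps only the shape shown here):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # One attach-controller stanza per subsystem id, shell-expanded.
            config+=("{
                \"method\": \"bdev_nvme_attach_controller\",
                \"params\": {
                    \"name\": \"Nvme$subsystem\",
                    \"trtype\": \"$TEST_TRANSPORT\",
                    \"traddr\": \"$NVMF_FIRST_TARGET_IP\",
                    \"adrfam\": \"ipv4\",
                    \"trsvcid\": \"$NVMF_PORT\",
                    \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
                    \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
                    \"hdgst\": ${hdgst:-false},
                    \"ddgst\": ${ddgst:-false}
                }
            }")
        done
        local IFS=,   # joins the stanzas with commas, as the traced IFS=, does
        # jq both validates the assembled document and normalizes the output
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }

With TEST_TRANSPORT=rdma, NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_PORT=4420 set, `gen_nvmf_target_json 0` reproduces the Nvme0 document printed above.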
00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=171 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 171 -ge 100 ']' 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:55.461 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.462 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
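`waitforio` above is a countdown poll against bdevperf's own RPC server: read `num_read_ops` for Nvme0n1 until it clears a threshold, proving I/O is actually flowing (here 171 >= 100 on the first probe, so `ret=0` and `break`). Reconstructed from the trace, the loop is roughly this (a sketch; `rpc_cmd` is a thin wrapper over scripts/rpc.py, and the poll interval is illustrative):

    waitforio() {   # succeed once Nvme0n1 shows >= 100 completed reads
        local i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
            # the trace's '[' 171 -ge 100 ']': enough I/O has completed
            [ "$read_io_count" -ge 100 ] && return 0
            sleep 0.25
        done
        return 1    # never saw enough reads within the retry budget
    }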
00:07:55.462 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.462 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:55.462 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.462 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.462 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.462 15:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:56.401 296.00 IOPS, 18.50 MiB/s [2024-12-15T14:55:24.971Z] [2024-12-15 15:55:24.856132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff80 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcff00 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfe80 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafe00 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fd80 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fd00 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fc80 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 
sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b6fc00 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b5fb80 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b4fb00 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b3fa80 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b2fa00 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b1f980 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b0f900 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aff880 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aef800 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 
[2024-12-15 15:55:24.856474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000adf780 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.401 [2024-12-15 15:55:24.856493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000acf700 len:0x10000 key:0x181f00 00:07:56.401 [2024-12-15 15:55:24.856502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000abf680 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aaf600 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a9f580 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a8f500 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a7f480 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a6f400 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a5f380 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856649] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a4f300 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a3f280 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a2f200 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a1f180 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a0f100 len:0x10000 key:0x181f00 00:07:56.402 [2024-12-15 15:55:24.856738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000df0000 len:0x10000 key:0x181e00 00:07:56.402 [2024-12-15 15:55:24.856757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ddff80 len:0x10000 key:0x181e00 00:07:56.402 [2024-12-15 15:55:24.856775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dcff00 len:0x10000 key:0x181e00 00:07:56.402 [2024-12-15 15:55:24.856796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dbfe80 len:0x10000 key:0x181e00 00:07:56.402 [2024-12-15 15:55:24.856816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0 00:07:56.402 [2024-12-15 15:55:24.856826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dafe00 len:0x10000 key:0x181e00
00:07:56.402 [2024-12-15 15:55:24.856835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d5cd5000 sqhd:7250 p:0 m:0 dnr:0
00:07:56.402 [... 29 further queued commands on qid:1 (WRITEs lba 45440-45952 key:0x181e00, READs lba 37888-40832 key:0x182a00) logged identically, each paired with the same ABORTED - SQ DELETION (00/08) completion; repetitive per-command dump trimmed ...]
00:07:56.403 [2024-12-15 15:55:24.859403] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae5080 was disconnected and freed. reset controller.
00:07:56.403 [2024-12-15 15:55:24.860282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:07:56.403 task offset: 40960 on job bdev=Nvme0n1 fails
00:07:56.403
00:07:56.403 Latency(us)
00:07:56.403 [2024-12-15T14:55:24.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:56.403 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:56.403 Job: Nvme0n1 ended in about 1.12 seconds with error
00:07:56.403 Verification LBA range: start 0x0 length 0x400
00:07:56.403 Nvme0n1 : 1.12 263.42 16.46 56.95 0.00 198105.97 2202.01 1020054.73
00:07:56.403 [2024-12-15T14:55:24.973Z] ===================================================================================================================
00:07:56.403 [2024-12-15T14:55:24.973Z] Total : 263.42 16.46 56.95 0.00 198105.97 2202.01 1020054.73
00:07:56.403 [2024-12-15 15:55:24.862751] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2666367
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=()
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:07:56.403 {
00:07:56.403 "params": {
00:07:56.403 "name": "Nvme$subsystem",
00:07:56.403 "trtype": "$TEST_TRANSPORT",
00:07:56.403 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:56.403 "adrfam": "ipv4",
00:07:56.403 "trsvcid": "$NVMF_PORT",
00:07:56.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:56.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:56.403 "hdgst": ${hdgst:-false},
00:07:56.403 "ddgst": ${ddgst:-false}
00:07:56.403 },
00:07:56.403 "method": "bdev_nvme_attach_controller"
00:07:56.403 }
00:07:56.403 EOF
00:07:56.403 )")
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq .
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=,
00:07:56.403 15:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:07:56.403 "params": {
00:07:56.403 "name": "Nvme0",
00:07:56.403 "trtype": "rdma",
00:07:56.403 "traddr": "192.168.100.8",
00:07:56.403 "adrfam": "ipv4",
00:07:56.403 "trsvcid": "4420",
00:07:56.403 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:56.403 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:56.403 "hdgst": false,
00:07:56.403 "ddgst": false
00:07:56.403 },
00:07:56.403 "method": "bdev_nvme_attach_controller"
00:07:56.403 }'
00:07:56.403 [2024-12-15 15:55:24.922191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:56.403 [2024-12-15 15:55:24.922243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666643 ]
00:07:56.661 [2024-12-15 15:55:24.992452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:56.661 [2024-12-15 15:55:25.030924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.661 Running I/O for 1 seconds...
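The gen_nvmf_target_json trace above resolves, for subsystem 0, into the single bdev_nvme_attach_controller object that printf emits and bdevperf consumes via --json /dev/fd/62. A minimal sketch of that generator, reconstructed from the trace (the function name and the jq sanity check are ours; any wrapper bdevperf may expect around this object is not shown in the log and is left out):

gen_attach_json() {
  local n=$1
  # Emit one attach-controller entry with the same fields the trace shows.
  cat <<EOF
{
  "params": {
    "name": "Nvme${n}",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${n}",
    "hostnqn": "nqn.2016-06.io.spdk:host${n}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# Validate the shape the same way the helper does (jq .).
gen_attach_json 0 | jq .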
00:07:58.048 3113.00 IOPS, 194.56 MiB/s
00:07:58.048 Latency(us)
00:07:58.048 [2024-12-15T14:55:26.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:58.048 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:58.048 Verification LBA range: start 0x0 length 0x400
00:07:58.048 Nvme0n1 : 1.01 3143.09 196.44 0.00 0.00 19947.45 638.98 39636.17
00:07:58.048 [2024-12-15T14:55:26.618Z] ===================================================================================================================
00:07:58.048 [2024-12-15T14:55:26.618Z] Total : 3143.09 196.44 0.00 0.00 19947.45 638.98 39636.17
00:07:58.048 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2666367 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:07:58.048 rmmod nvme_rdma
00:07:58.048 rmmod nvme_fabrics
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 2666167 ']'
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 2666167
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2666167 ']'
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2666167
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2666167
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2666167'
00:07:58.048 killing process with pid 2666167
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2666167
00:07:58.048 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2666167
00:07:58.307 [2024-12-15 15:55:26.804092] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:07:58.307
00:07:58.307 real 0m11.519s
00:07:58.307 user 0m20.139s
00:07:58.307 sys 0m6.512s
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:58.307 ************************************
00:07:58.307 END TEST nvmf_host_management
00:07:58.307 ************************************
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:58.307 15:55:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:58.567 ************************************
00:07:58.567 START TEST nvmf_lvol
00:07:58.567 ************************************
00:07:58.567 15:55:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
00:07:58.567 * Looking for test storage...
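The teardown traced above follows a fixed pattern: unload nvme-rdma and nvme-fabrics, then killprocess confirms the pid is still alive, refuses to kill a bare sudo wrapper, and kills and reaps the target. A condensed sketch of that pattern as reconstructed from the trace (not the harness's exact function; the name is ours):

killprocess_sketch() {
  local pid=$1
  [ -z "$pid" ] && return 1                  # no pid recorded
  kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1 in the trace
  [ "$name" = sudo ] && return 1             # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                    # reap and collect exit status if it is our child
}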
00:07:58.567 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.567 --rc genhtml_branch_coverage=1 00:07:58.567 --rc genhtml_function_coverage=1 00:07:58.567 --rc genhtml_legend=1 00:07:58.567 --rc geninfo_all_blocks=1 00:07:58.567 --rc geninfo_unexecuted_blocks=1 00:07:58.567 00:07:58.567 ' 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.567 --rc genhtml_branch_coverage=1 00:07:58.567 --rc genhtml_function_coverage=1 00:07:58.567 --rc genhtml_legend=1 00:07:58.567 --rc geninfo_all_blocks=1 00:07:58.567 --rc geninfo_unexecuted_blocks=1 00:07:58.567 00:07:58.567 ' 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.567 --rc genhtml_branch_coverage=1 00:07:58.567 --rc genhtml_function_coverage=1 00:07:58.567 --rc genhtml_legend=1 00:07:58.567 --rc geninfo_all_blocks=1 00:07:58.567 --rc geninfo_unexecuted_blocks=1 00:07:58.567 00:07:58.567 ' 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.567 --rc genhtml_branch_coverage=1 00:07:58.567 --rc genhtml_function_coverage=1 00:07:58.567 --rc genhtml_legend=1 00:07:58.567 --rc geninfo_all_blocks=1 00:07:58.567 --rc geninfo_unexecuted_blocks=1 00:07:58.567 00:07:58.567 ' 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.567 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.827 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.827 15:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.400 15:55:33 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:05.400 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:05.400 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:05.400 15:55:33 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:05.400 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:05.400 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # rdma_device_init 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:05.400 15:55:33 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@526 -- # allocate_nic_ips 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:05.400 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:05.400 15:55:33 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:05.400 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:05.400 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:05.401 altname enp217s0f0np0 00:08:05.401 altname ens818f0np0 00:08:05.401 inet 192.168.100.8/24 scope global mlx_0_0 00:08:05.401 valid_lft forever preferred_lft forever 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:05.401 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:05.401 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:05.401 altname enp217s0f1np1 00:08:05.401 altname ens818f1np1 00:08:05.401 inet 192.168.100.9/24 scope global mlx_0_1 00:08:05.401 valid_lft forever preferred_lft forever 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:05.401 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:08:05.661 192.168.100.9' 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:08:05.661 192.168.100.9' 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # head -n 1 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:08:05.661 192.168.100.9' 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # tail -n +2 00:08:05.661 15:55:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # head -n 1 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-rdma 
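The get_ip_address calls traced above extract each RDMA port's IPv4 address with the same three-stage pipeline every time: list the interface's IPv4 addresses in one-line form, take column 4 (addr/prefix), and strip the prefix length. A standalone sketch reconstructed from the trace (the mlx_0_* names match this host's ports; other hosts would differ):

get_ip_address() {
  local interface=$1
  # `ip -o -4` prints one record per line; field 4 is "192.168.100.8/24"
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
for nic in mlx_0_0 mlx_0_1; do
  echo "$nic -> $(get_ip_address "$nic")"    # 192.168.100.8 and .9 above
done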
00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.661 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=2670351 00:08:05.662 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:05.662 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 2670351 00:08:05.662 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2670351 ']' 00:08:05.662 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.662 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.662 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.662 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.662 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 [2024-12-15 15:55:34.089990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:05.662 [2024-12-15 15:55:34.090037] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.662 [2024-12-15 15:55:34.160564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.662 [2024-12-15 15:55:34.199263] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.662 [2024-12-15 15:55:34.199302] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.662 [2024-12-15 15:55:34.199312] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.662 [2024-12-15 15:55:34.199320] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.662 [2024-12-15 15:55:34.199327] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:05.662 [2024-12-15 15:55:34.199373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.662 [2024-12-15 15:55:34.199467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.662 [2024-12-15 15:55:34.199470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.922 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.922 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:05.922 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:05.922 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.922 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.922 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.922 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:06.182 [2024-12-15 15:55:34.538791] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17122c0/0x17167b0) succeed. 00:08:06.182 [2024-12-15 15:55:34.549222] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1713860/0x1757e50) succeed. 00:08:06.182 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:06.441 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:06.441 15:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:06.701 15:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:06.701 15:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:06.701 15:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:06.961 15:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ebc232b0-e5bb-4072-8b35-fc880a138da7 00:08:06.961 15:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ebc232b0-e5bb-4072-8b35-fc880a138da7 lvol 20 00:08:07.220 15:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7696bfc5-a881-4115-832f-63a89f29b45e 00:08:07.220 15:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:07.479 15:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7696bfc5-a881-4115-832f-63a89f29b45e 00:08:07.479 15:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:07.739 [2024-12-15 15:55:36.209877] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:07.739 15:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:08.049 15:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2670667 00:08:08.049 15:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:08.049 15:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:09.068 15:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7696bfc5-a881-4115-832f-63a89f29b45e MY_SNAPSHOT 00:08:09.328 15:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=608239e9-2302-467a-b3b6-d519a1185a35 00:08:09.328 15:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7696bfc5-a881-4115-832f-63a89f29b45e 30 00:08:09.328 15:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 608239e9-2302-467a-b3b6-d519a1185a35 MY_CLONE 00:08:09.587 15:55:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b9b41bf0-7c24-4f2e-bfa2-362300f0d3d6 00:08:09.587 15:55:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b9b41bf0-7c24-4f2e-bfa2-362300f0d3d6 00:08:09.846 15:55:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2670667 00:08:19.826 Initializing NVMe Controllers 00:08:19.826 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:19.826 Controller IO queue size 128, less than required. 00:08:19.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:19.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:19.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:19.826 Initialization complete. Launching workers. 
00:08:09.846 15:55:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2670667
00:08:19.826 Initializing NVMe Controllers
00:08:19.826 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:08:19.826 Controller IO queue size 128, less than required.
00:08:19.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:19.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:08:19.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:08:19.826 Initialization complete. Launching workers.
00:08:19.826 ========================================================
00:08:19.826 Latency(us)
00:08:19.826 Device Information : IOPS MiB/s Average min max
00:08:19.826 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16876.10 65.92 7586.10 2108.58 47878.72
00:08:19.826 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16843.70 65.80 7600.03 3502.94 50432.25
00:08:19.826 ========================================================
00:08:19.826 Total : 33719.80 131.72 7593.06 2108.58 50432.25
00:08:19.826
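The table is per initiator core: each of the two cores sustains roughly 16.8K write IOPS, about 33.7K IOPS and 131.7 MiB/s in total at the 4 KiB I/O size, and the ~7.6 ms average latency is what Little's law predicts for 128 outstanding commands per queue (128 / 16876 per second is roughly 7.58 ms). A one-line consistency check of the numbers, throughput taken from the Total row and latency from one per-core row:

# Rough sanity check of the table above:
awk 'BEGIN { printf "MiB/s=%.2f  avg_ms=%.2f\n", 33719.80*4096/2^20, 128/16876.10*1000 }'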
00:08:19.826 15:55:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:19.826 15:55:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7696bfc5-a881-4115-832f-63a89f29b45e
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ebc232b0-e5bb-4072-8b35-fc880a138da7
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:19.826 rmmod nvme_rdma
00:08:19.826 rmmod nvme_fabrics
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 2670351 ']'
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 2670351
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2670351 ']'
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2670351
00:08:19.826 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:08:20.085 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:20.085 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2670351
00:08:20.085 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:20.085 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:20.085 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2670351'
00:08:20.085 killing process with pid 2670351
00:08:20.085 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2670351
00:08:20.085 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2670351
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:08:20.344
00:08:20.344 real 0m21.831s
00:08:20.344 user 1m10.151s
00:08:20.344 sys 0m6.418s
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:20.344 ************************************
00:08:20.344 END TEST nvmf_lvol
00:08:20.344 ************************************
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:20.344 ************************************
00:08:20.344 START TEST nvmf_lvs_grow
00:08:20.344 ************************************
00:08:20.344 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma
00:08:20.344 * Looking for test storage...
00:08:20.344 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:20.345 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.605 --rc genhtml_branch_coverage=1 00:08:20.605 --rc genhtml_function_coverage=1 00:08:20.605 --rc genhtml_legend=1 00:08:20.605 --rc geninfo_all_blocks=1 00:08:20.605 --rc geninfo_unexecuted_blocks=1 00:08:20.605 00:08:20.605 ' 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.605 --rc genhtml_branch_coverage=1 00:08:20.605 --rc genhtml_function_coverage=1 00:08:20.605 --rc genhtml_legend=1 00:08:20.605 --rc geninfo_all_blocks=1 00:08:20.605 --rc geninfo_unexecuted_blocks=1 00:08:20.605 00:08:20.605 ' 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.605 --rc genhtml_branch_coverage=1 00:08:20.605 --rc genhtml_function_coverage=1 00:08:20.605 --rc genhtml_legend=1 00:08:20.605 --rc geninfo_all_blocks=1 00:08:20.605 --rc geninfo_unexecuted_blocks=1 00:08:20.605 00:08:20.605 ' 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.605 --rc genhtml_branch_coverage=1 00:08:20.605 --rc genhtml_function_coverage=1 00:08:20.605 --rc genhtml_legend=1 00:08:20.605 --rc geninfo_all_blocks=1 00:08:20.605 --rc geninfo_unexecuted_blocks=1 00:08:20.605 00:08:20.605 ' 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.605 15:55:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.605 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:20.605 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.606 15:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.175 15:55:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:27.175 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:27.175 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:27.175 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:27.176 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:27.176 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # rdma_device_init 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:27.176 15:55:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@526 -- # allocate_nic_ips 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:27.176 
15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:27.176 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:27.176 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:27.176 altname enp217s0f0np0 00:08:27.176 altname ens818f0np0 00:08:27.176 inet 192.168.100.8/24 scope global mlx_0_0 00:08:27.176 valid_lft forever preferred_lft forever 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:27.176 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:27.176 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:27.176 altname enp217s0f1np1 00:08:27.176 altname ens818f1np1 00:08:27.176 inet 192.168.100.9/24 scope global mlx_0_1 00:08:27.176 valid_lft forever preferred_lft forever 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.176 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:27.177 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:08:27.436 192.168.100.9' 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:08:27.436 192.168.100.9' 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # head -n 1 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # tail -n +2 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:08:27.436 192.168.100.9' 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # head -n 1 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:08:27.436 15:55:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=2676234 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 2676234 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2676234 ']' 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.436 15:55:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.436 [2024-12-15 15:55:55.851648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:27.436 [2024-12-15 15:55:55.851705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.436 [2024-12-15 15:55:55.920612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.436 [2024-12-15 15:55:55.957490] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.436 [2024-12-15 15:55:55.957532] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.436 [2024-12-15 15:55:55.957542] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.436 [2024-12-15 15:55:55.957570] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.436 [2024-12-15 15:55:55.957578] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
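Here nvmfappstart has launched the target application (nvmf_tgt -i 0 -e 0xFFFF -m 0x1, a single reactor pinned to core 0) and waitforlisten polls its RPC socket, up to max_retries=100, until the application answers. The real helper lives in common/autotest_common.sh; what follows is only a hedged sketch of the same readiness loop, using rpc_get_methods as a cheap probe RPC:

nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
for _ in $(seq 1 100); do
    # Ready once the UNIX domain RPC socket answers.
    rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || exit 1  # give up if the target died
    sleep 0.5
done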
00:08:27.436 [2024-12-15 15:55:55.957601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.695 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:27.695 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0
00:08:27.695 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:08:27.695 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:27.695 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:27.695 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:27.695 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:08:27.955 [2024-12-15 15:55:56.280011] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ab1bd0/0x1ab60c0) succeed.
00:08:27.955 [2024-12-15 15:55:56.288938] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ab30d0/0x1af7760) succeed.
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:27.955 ************************************
00:08:27.955 START TEST lvs_grow_clean
00:08:27.955 ************************************
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
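The lvs_grow function declares the whole test plan in its locals: a 200 MiB backing file to start (aio_init_size_mb), 400 MiB after the grow (aio_final_size_mb), and a 150 MiB lvol on top (lvol_bdev_size_mb). Because the backing store is an ordinary file exposed through an AIO bdev, growing the "disk" later is nothing more exotic than truncate plus a rescan. The setup step, sketched with an illustrative file path in place of the workspace path used below:

truncate -s 200M /tmp/aio_file                       # backing file (path is hypothetical)
rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # expose it with 4 KiB logical blocks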
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:27.955 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:08:28.214 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:08:28.214 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:08:28.473 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=952c58e3-14c6-40d0-935d-fc30edb3fa4a
00:08:28.473 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a
00:08:28.473 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:08:28.473 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:08:28.473 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:08:28.473 15:55:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a lvol 150
00:08:28.733 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=18d1d32b-697e-41a6-ad4b-d43263e5d90c
00:08:28.733 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:28.733 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:08:28.992 [2024-12-15 15:55:57.342594] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:08:28.992 [2024-12-15 15:55:57.342644] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:08:28.992 true
00:08:28.992 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a
00:08:28.992 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:08:28.992 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
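The cluster arithmetic is the heart of the test. A 200 MiB file carved into 4 MiB clusters (--cluster-sz 4194304) gives 50 raw clusters, and with the lvstore's own metadata taken out the store reports total_data_clusters=49. Growing the file to 400 MiB (51200 to 102400 4 KiB blocks) and rescanning the AIO bdev deliberately does not change that: the lvstore keeps its old geometry until bdev_lvol_grow_lvstore is invoked, which is why the check after the rescan still expects 49. Restated compactly (a sketch, reusing the $lvs variable convention from above):

clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( clusters == 49 ))   # still 49: the AIO bdev grew, the lvstore has not (yet)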
00:08:28.992 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:08:29.251 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 18d1d32b-697e-41a6-ad4b-d43263e5d90c
00:08:29.510 15:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:08:29.510 [2024-12-15 15:55:58.060995] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:08:29.510 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2676599
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2676599 /var/tmp/bdevperf.sock
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2676599 ']'
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:29.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:29.769 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
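The I/O generator in this test is bdevperf rather than spdk_nvme_perf: a second SPDK application with its own RPC socket (/var/tmp/bdevperf.sock) and one reactor on core 1 (-m 0x2), configured for 4 KiB random writes at queue depth 128 for 10 seconds, printing interim results each second (-S 1) and started with -z so it idles until a perform_tests RPC arrives. Once the socket is up, the script attaches it to the target as an NVMe-oF RDMA initiator, which surfaces the remote namespace as bdev Nvme0n1. In outline:

bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# ...wait for /var/tmp/bdevperf.sock to answer, then:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000  # dumps the JSON below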
00:08:29.769 [2024-12-15 15:55:58.303413] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:29.769 [2024-12-15 15:55:58.303465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676599 ]
00:08:30.029 [2024-12-15 15:55:58.373205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.029 [2024-12-15 15:55:58.412522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:30.029 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:30.029 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0
00:08:30.029 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:08:30.288 Nvme0n1
00:08:30.288 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:08:30.547 [
00:08:30.547 {
00:08:30.547 "name": "Nvme0n1",
00:08:30.547 "aliases": [
00:08:30.547 "18d1d32b-697e-41a6-ad4b-d43263e5d90c"
00:08:30.547 ],
00:08:30.547 "product_name": "NVMe disk",
00:08:30.547 "block_size": 4096,
00:08:30.547 "num_blocks": 38912,
00:08:30.548 "uuid": "18d1d32b-697e-41a6-ad4b-d43263e5d90c",
00:08:30.548 "numa_id": 1,
00:08:30.548 "assigned_rate_limits": {
00:08:30.548 "rw_ios_per_sec": 0,
00:08:30.548 "rw_mbytes_per_sec": 0,
00:08:30.548 "r_mbytes_per_sec": 0,
00:08:30.548 "w_mbytes_per_sec": 0
00:08:30.548 },
00:08:30.548 "claimed": false,
00:08:30.548 "zoned": false,
00:08:30.548 "supported_io_types": {
00:08:30.548 "read": true,
00:08:30.548 "write": true,
00:08:30.548 "unmap": true,
00:08:30.548 "flush": true,
00:08:30.548 "reset": true,
00:08:30.548 "nvme_admin": true,
00:08:30.548 "nvme_io": true,
00:08:30.548 "nvme_io_md": false,
00:08:30.548 "write_zeroes": true,
00:08:30.548 "zcopy": false,
00:08:30.548 "get_zone_info": false,
00:08:30.548 "zone_management": false,
00:08:30.548 "zone_append": false,
00:08:30.548 "compare": true,
00:08:30.548 "compare_and_write": true,
00:08:30.548 "abort": true,
00:08:30.548 "seek_hole": false,
00:08:30.548 "seek_data": false,
00:08:30.548 "copy": true,
00:08:30.548 "nvme_iov_md": false
00:08:30.548 },
00:08:30.548 "memory_domains": [
00:08:30.548 {
00:08:30.548 "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:08:30.548 "dma_device_type": 0
00:08:30.548 }
00:08:30.548 ],
00:08:30.548 "driver_specific": {
00:08:30.548 "nvme": [
00:08:30.548 {
00:08:30.548 "trid": {
00:08:30.548 "trtype": "RDMA",
00:08:30.548 "adrfam": "IPv4",
00:08:30.548 "traddr": "192.168.100.8",
00:08:30.548 "trsvcid": "4420",
00:08:30.548 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:08:30.548 },
00:08:30.548 "ctrlr_data": {
00:08:30.548 "cntlid": 1,
00:08:30.548 "vendor_id": "0x8086",
00:08:30.548 "model_number": "SPDK bdev Controller",
00:08:30.548 "serial_number": "SPDK0",
00:08:30.548 "firmware_revision": "24.09.1",
00:08:30.548 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:30.548 "oacs": {
00:08:30.548 "security": 0,
00:08:30.548 "format": 0,
00:08:30.548 "firmware": 0,
00:08:30.548 "ns_manage": 0
00:08:30.548 },
00:08:30.548 "multi_ctrlr": true,
00:08:30.548 "ana_reporting": false 00:08:30.548 }, 00:08:30.548 "vs": { 00:08:30.548 "nvme_version": "1.3" 00:08:30.548 }, 00:08:30.548 "ns_data": { 00:08:30.548 "id": 1, 00:08:30.548 "can_share": true 00:08:30.548 } 00:08:30.548 } 00:08:30.548 ], 00:08:30.548 "mp_policy": "active_passive" 00:08:30.548 } 00:08:30.548 } 00:08:30.548 ] 00:08:30.548 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2676816 00:08:30.548 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:30.548 15:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:30.548 Running I/O for 10 seconds... 00:08:31.926 Latency(us) 00:08:31.926 [2024-12-15T14:56:00.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.926 Nvme0n1 : 1.00 34691.00 135.51 0.00 0.00 0.00 0.00 0.00 00:08:31.926 [2024-12-15T14:56:00.496Z] =================================================================================================================== 00:08:31.926 [2024-12-15T14:56:00.496Z] Total : 34691.00 135.51 0.00 0.00 0.00 0.00 0.00 00:08:31.926 00:08:32.493 15:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:32.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.493 Nvme0n1 : 2.00 34768.50 135.81 0.00 0.00 0.00 0.00 0.00 00:08:32.493 [2024-12-15T14:56:01.063Z] =================================================================================================================== 00:08:32.493 [2024-12-15T14:56:01.063Z] Total : 34768.50 135.81 0.00 0.00 0.00 0.00 0.00 00:08:32.493 00:08:32.754 true 00:08:32.754 15:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:32.754 15:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:33.013 15:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:33.013 15:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:33.013 15:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2676816 00:08:33.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.581 Nvme0n1 : 3.00 35018.67 136.79 0.00 0.00 0.00 0.00 0.00 00:08:33.581 [2024-12-15T14:56:02.151Z] =================================================================================================================== 00:08:33.581 [2024-12-15T14:56:02.151Z] Total : 35018.67 136.79 0.00 0.00 0.00 0.00 0.00 00:08:33.581 00:08:34.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.519 Nvme0n1 : 4.00 35216.75 137.57 0.00 0.00 0.00 0.00 0.00 00:08:34.519 [2024-12-15T14:56:03.089Z] 
=================================================================================================================== 00:08:34.519 [2024-12-15T14:56:03.089Z] Total : 35216.75 137.57 0.00 0.00 0.00 0.00 0.00 00:08:34.519 00:08:35.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.898 Nvme0n1 : 5.00 35353.60 138.10 0.00 0.00 0.00 0.00 0.00 00:08:35.898 [2024-12-15T14:56:04.468Z] =================================================================================================================== 00:08:35.898 [2024-12-15T14:56:04.468Z] Total : 35353.60 138.10 0.00 0.00 0.00 0.00 0.00 00:08:35.898 00:08:36.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.835 Nvme0n1 : 6.00 35434.67 138.42 0.00 0.00 0.00 0.00 0.00 00:08:36.835 [2024-12-15T14:56:05.405Z] =================================================================================================================== 00:08:36.835 [2024-12-15T14:56:05.405Z] Total : 35434.67 138.42 0.00 0.00 0.00 0.00 0.00 00:08:36.835 00:08:37.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.772 Nvme0n1 : 7.00 35497.00 138.66 0.00 0.00 0.00 0.00 0.00 00:08:37.772 [2024-12-15T14:56:06.342Z] =================================================================================================================== 00:08:37.772 [2024-12-15T14:56:06.342Z] Total : 35497.00 138.66 0.00 0.00 0.00 0.00 0.00 00:08:37.772 00:08:38.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.711 Nvme0n1 : 8.00 35544.25 138.84 0.00 0.00 0.00 0.00 0.00 00:08:38.711 [2024-12-15T14:56:07.281Z] =================================================================================================================== 00:08:38.711 [2024-12-15T14:56:07.281Z] Total : 35544.25 138.84 0.00 0.00 0.00 0.00 0.00 00:08:38.711 00:08:39.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.649 Nvme0n1 : 9.00 35580.67 138.99 0.00 0.00 0.00 0.00 0.00 00:08:39.649 [2024-12-15T14:56:08.219Z] =================================================================================================================== 00:08:39.649 [2024-12-15T14:56:08.219Z] Total : 35580.67 138.99 0.00 0.00 0.00 0.00 0.00 00:08:39.649 00:08:40.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.586 Nvme0n1 : 10.00 35616.10 139.13 0.00 0.00 0.00 0.00 0.00 00:08:40.586 [2024-12-15T14:56:09.156Z] =================================================================================================================== 00:08:40.586 [2024-12-15T14:56:09.156Z] Total : 35616.10 139.13 0.00 0.00 0.00 0.00 0.00 00:08:40.586 00:08:40.586 00:08:40.586 Latency(us) 00:08:40.586 [2024-12-15T14:56:09.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.586 Nvme0n1 : 10.00 35615.77 139.12 0.00 0.00 3591.09 2136.47 11901.34 00:08:40.586 [2024-12-15T14:56:09.156Z] =================================================================================================================== 00:08:40.586 [2024-12-15T14:56:09.156Z] Total : 35615.77 139.12 0.00 0.00 3591.09 2136.47 11901.34 00:08:40.586 { 00:08:40.586 "results": [ 00:08:40.586 { 00:08:40.586 "job": "Nvme0n1", 00:08:40.586 "core_mask": "0x2", 00:08:40.586 "workload": "randwrite", 00:08:40.586 "status": "finished", 00:08:40.586 "queue_depth": 128, 00:08:40.586 "io_size": 4096, 
00:08:40.586 "runtime": 10.003601, 00:08:40.586 "iops": 35615.7747595091, 00:08:40.587 "mibps": 139.12412015433242, 00:08:40.587 "io_failed": 0, 00:08:40.587 "io_timeout": 0, 00:08:40.587 "avg_latency_us": 3591.092472218386, 00:08:40.587 "min_latency_us": 2136.4736, 00:08:40.587 "max_latency_us": 11901.3376 00:08:40.587 } 00:08:40.587 ], 00:08:40.587 "core_count": 1 00:08:40.587 } 00:08:40.587 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2676599 00:08:40.587 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2676599 ']' 00:08:40.587 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2676599 00:08:40.587 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:40.587 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.587 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2676599 00:08:40.846 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:40.846 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:40.846 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2676599' 00:08:40.846 killing process with pid 2676599 00:08:40.846 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2676599 00:08:40.846 Received shutdown signal, test time was about 10.000000 seconds 00:08:40.846 00:08:40.846 Latency(us) 00:08:40.846 [2024-12-15T14:56:09.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.846 [2024-12-15T14:56:09.416Z] =================================================================================================================== 00:08:40.846 [2024-12-15T14:56:09.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:40.846 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2676599 00:08:40.846 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:41.105 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:41.364 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:41.364 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:41.364 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:41.364 15:56:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:41.364 15:56:09 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.623 [2024-12-15 15:56:10.099162] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.623 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:41.623 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:41.623 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:41.623 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:41.623 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.623 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:41.623 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.624 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:41.624 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.624 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:41.624 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:41.624 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:41.883 request: 00:08:41.883 { 00:08:41.883 "uuid": "952c58e3-14c6-40d0-935d-fc30edb3fa4a", 00:08:41.883 "method": "bdev_lvol_get_lvstores", 00:08:41.883 "req_id": 1 00:08:41.883 } 00:08:41.883 Got JSON-RPC error response 00:08:41.883 response: 00:08:41.883 { 00:08:41.883 "code": -19, 00:08:41.883 "message": "No such device" 00:08:41.883 } 00:08:41.883 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:41.883 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:41.883 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:41.883 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:41.883 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.142 aio_bdev 00:08:42.142 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 18d1d32b-697e-41a6-ad4b-d43263e5d90c 00:08:42.142 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=18d1d32b-697e-41a6-ad4b-d43263e5d90c 00:08:42.142 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.142 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:42.142 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.142 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.142 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.402 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 18d1d32b-697e-41a6-ad4b-d43263e5d90c -t 2000 00:08:42.402 [ 00:08:42.402 { 00:08:42.402 "name": "18d1d32b-697e-41a6-ad4b-d43263e5d90c", 00:08:42.402 "aliases": [ 00:08:42.402 "lvs/lvol" 00:08:42.402 ], 00:08:42.402 "product_name": "Logical Volume", 00:08:42.402 "block_size": 4096, 00:08:42.402 "num_blocks": 38912, 00:08:42.402 "uuid": "18d1d32b-697e-41a6-ad4b-d43263e5d90c", 00:08:42.402 "assigned_rate_limits": { 00:08:42.402 "rw_ios_per_sec": 0, 00:08:42.402 "rw_mbytes_per_sec": 0, 00:08:42.402 "r_mbytes_per_sec": 0, 00:08:42.402 "w_mbytes_per_sec": 0 00:08:42.402 }, 00:08:42.402 "claimed": false, 00:08:42.402 "zoned": false, 00:08:42.402 "supported_io_types": { 00:08:42.402 "read": true, 00:08:42.402 "write": true, 00:08:42.402 "unmap": true, 00:08:42.402 "flush": false, 00:08:42.402 "reset": true, 00:08:42.402 "nvme_admin": false, 00:08:42.402 "nvme_io": false, 00:08:42.402 "nvme_io_md": false, 00:08:42.402 "write_zeroes": true, 00:08:42.402 "zcopy": false, 00:08:42.402 "get_zone_info": false, 00:08:42.402 "zone_management": false, 00:08:42.402 "zone_append": false, 00:08:42.402 "compare": false, 00:08:42.402 "compare_and_write": false, 00:08:42.402 "abort": false, 00:08:42.402 "seek_hole": true, 00:08:42.402 "seek_data": true, 00:08:42.402 "copy": false, 00:08:42.402 "nvme_iov_md": false 00:08:42.402 }, 00:08:42.402 "driver_specific": { 00:08:42.402 "lvol": { 00:08:42.402 "lvol_store_uuid": "952c58e3-14c6-40d0-935d-fc30edb3fa4a", 00:08:42.402 "base_bdev": "aio_bdev", 00:08:42.402 "thin_provision": false, 00:08:42.402 "num_allocated_clusters": 38, 00:08:42.402 "snapshot": false, 00:08:42.402 "clone": false, 00:08:42.402 "esnap_clone": false 00:08:42.402 } 00:08:42.402 } 00:08:42.402 } 00:08:42.402 ] 00:08:42.402 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:42.402 15:56:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:42.402 15:56:10 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:42.661 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:42.661 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:42.661 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:42.982 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:42.982 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 18d1d32b-697e-41a6-ad4b-d43263e5d90c 00:08:42.982 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 952c58e3-14c6-40d0-935d-fc30edb3fa4a 00:08:43.241 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.500 00:08:43.500 real 0m15.488s 00:08:43.500 user 0m15.359s 00:08:43.500 sys 0m1.108s 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:43.500 ************************************ 00:08:43.500 END TEST lvs_grow_clean 00:08:43.500 ************************************ 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.500 ************************************ 00:08:43.500 START TEST lvs_grow_dirty 00:08:43.500 ************************************ 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:08:43.500 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:43.501 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.501 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.501 15:56:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.760 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:43.760 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:44.019 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:44.019 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:44.019 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:44.019 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:44.019 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:44.019 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c8205f9e-7d88-480f-ab64-71a41617fc21 lvol 150 00:08:44.278 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7b0a0d66-2541-45eb-b274-49f0dc8c6e10 00:08:44.278 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.278 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:44.538 [2024-12-15 15:56:12.939150] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:44.538 [2024-12-15 15:56:12.939199] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:44.538 true 00:08:44.538 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:44.538 15:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:44.797 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:44.797 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:44.797 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b0a0d66-2541-45eb-b274-49f0dc8c6e10 00:08:45.056 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:45.316 [2024-12-15 15:56:13.681541] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2679391 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2679391 /var/tmp/bdevperf.sock 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2679391 ']' 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.316 15:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.575 [2024-12-15 15:56:13.927751] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
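(Editor's note) The trace above is the core of the grow-under-I/O scenario: the backing file is enlarged and the aio bdev rescanned — the rescan notice shows the block count going from 51200 to 102400 — but the lvstore keeps reporting the old total_data_clusters until bdev_lvol_grow_lvstore is issued, which the script does while bdevperf's random-write workload is in flight. A condensed sketch of those RPC calls follows; $rpc stands in for the SPDK scripts/rpc.py client and the backing-file path is a placeholder, not the path used in this run:

    # Sketch of the resize sequence exercised above (paths illustrative).
    rpc=./scripts/rpc.py
    truncate -s 200M /tmp/aio_bdev                      # initial backing file
    $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096    # 4 KiB block size
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # prints the lvstore UUID
    $rpc bdev_lvol_create -u "$lvs" lvol 150            # 150 MiB volume
    truncate -s 400M /tmp/aio_bdev                      # enlarge the backing file
    $rpc bdev_aio_rescan aio_bdev                       # bdev picks up the new size...
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
    # ...but the count stays at 49 until the lvstore itself is grown:
    $rpc bdev_lvol_grow_lvstore -u "$lvs"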
00:08:45.575 [2024-12-15 15:56:13.927805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679391 ] 00:08:45.575 [2024-12-15 15:56:13.998724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.575 [2024-12-15 15:56:14.038303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.575 15:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.575 15:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:45.575 15:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:45.835 Nvme0n1 00:08:46.094 15:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:46.094 [ 00:08:46.094 { 00:08:46.094 "name": "Nvme0n1", 00:08:46.094 "aliases": [ 00:08:46.094 "7b0a0d66-2541-45eb-b274-49f0dc8c6e10" 00:08:46.094 ], 00:08:46.094 "product_name": "NVMe disk", 00:08:46.094 "block_size": 4096, 00:08:46.094 "num_blocks": 38912, 00:08:46.094 "uuid": "7b0a0d66-2541-45eb-b274-49f0dc8c6e10", 00:08:46.094 "numa_id": 1, 00:08:46.094 "assigned_rate_limits": { 00:08:46.094 "rw_ios_per_sec": 0, 00:08:46.094 "rw_mbytes_per_sec": 0, 00:08:46.094 "r_mbytes_per_sec": 0, 00:08:46.094 "w_mbytes_per_sec": 0 00:08:46.094 }, 00:08:46.094 "claimed": false, 00:08:46.094 "zoned": false, 00:08:46.094 "supported_io_types": { 00:08:46.094 "read": true, 00:08:46.094 "write": true, 00:08:46.094 "unmap": true, 00:08:46.094 "flush": true, 00:08:46.094 "reset": true, 00:08:46.094 "nvme_admin": true, 00:08:46.094 "nvme_io": true, 00:08:46.094 "nvme_io_md": false, 00:08:46.094 "write_zeroes": true, 00:08:46.094 "zcopy": false, 00:08:46.094 "get_zone_info": false, 00:08:46.094 "zone_management": false, 00:08:46.094 "zone_append": false, 00:08:46.094 "compare": true, 00:08:46.094 "compare_and_write": true, 00:08:46.094 "abort": true, 00:08:46.094 "seek_hole": false, 00:08:46.094 "seek_data": false, 00:08:46.094 "copy": true, 00:08:46.094 "nvme_iov_md": false 00:08:46.094 }, 00:08:46.094 "memory_domains": [ 00:08:46.094 { 00:08:46.094 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:46.094 "dma_device_type": 0 00:08:46.094 } 00:08:46.094 ], 00:08:46.094 "driver_specific": { 00:08:46.094 "nvme": [ 00:08:46.094 { 00:08:46.094 "trid": { 00:08:46.094 "trtype": "RDMA", 00:08:46.094 "adrfam": "IPv4", 00:08:46.094 "traddr": "192.168.100.8", 00:08:46.094 "trsvcid": "4420", 00:08:46.094 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:46.094 }, 00:08:46.094 "ctrlr_data": { 00:08:46.094 "cntlid": 1, 00:08:46.094 "vendor_id": "0x8086", 00:08:46.094 "model_number": "SPDK bdev Controller", 00:08:46.094 "serial_number": "SPDK0", 00:08:46.094 "firmware_revision": "24.09.1", 00:08:46.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:46.094 "oacs": { 00:08:46.094 "security": 0, 00:08:46.094 "format": 0, 00:08:46.094 "firmware": 0, 00:08:46.094 "ns_manage": 0 00:08:46.094 }, 00:08:46.094 "multi_ctrlr": true, 
00:08:46.094 "ana_reporting": false 00:08:46.094 }, 00:08:46.094 "vs": { 00:08:46.094 "nvme_version": "1.3" 00:08:46.094 }, 00:08:46.094 "ns_data": { 00:08:46.094 "id": 1, 00:08:46.094 "can_share": true 00:08:46.094 } 00:08:46.094 } 00:08:46.094 ], 00:08:46.094 "mp_policy": "active_passive" 00:08:46.094 } 00:08:46.094 } 00:08:46.094 ] 00:08:46.094 15:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2679558 00:08:46.094 15:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:46.094 15:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:46.353 Running I/O for 10 seconds... 00:08:47.289 Latency(us) 00:08:47.289 [2024-12-15T14:56:15.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.289 Nvme0n1 : 1.00 34848.00 136.12 0.00 0.00 0.00 0.00 0.00 00:08:47.289 [2024-12-15T14:56:15.859Z] =================================================================================================================== 00:08:47.289 [2024-12-15T14:56:15.859Z] Total : 34848.00 136.12 0.00 0.00 0.00 0.00 0.00 00:08:47.289 00:08:48.227 15:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:48.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.227 Nvme0n1 : 2.00 35200.50 137.50 0.00 0.00 0.00 0.00 0.00 00:08:48.227 [2024-12-15T14:56:16.797Z] =================================================================================================================== 00:08:48.227 [2024-12-15T14:56:16.797Z] Total : 35200.50 137.50 0.00 0.00 0.00 0.00 0.00 00:08:48.227 00:08:48.227 true 00:08:48.486 15:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:48.486 15:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:48.486 15:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:48.486 15:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:48.486 15:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2679558 00:08:49.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.424 Nvme0n1 : 3.00 35285.67 137.83 0.00 0.00 0.00 0.00 0.00 00:08:49.424 [2024-12-15T14:56:17.994Z] =================================================================================================================== 00:08:49.424 [2024-12-15T14:56:17.994Z] Total : 35285.67 137.83 0.00 0.00 0.00 0.00 0.00 00:08:49.424 00:08:50.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.361 Nvme0n1 : 4.00 35416.75 138.35 0.00 0.00 0.00 0.00 0.00 00:08:50.361 [2024-12-15T14:56:18.931Z] 
=================================================================================================================== 00:08:50.361 [2024-12-15T14:56:18.931Z] Total : 35416.75 138.35 0.00 0.00 0.00 0.00 0.00 00:08:50.361 00:08:51.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.299 Nvme0n1 : 5.00 35501.40 138.68 0.00 0.00 0.00 0.00 0.00 00:08:51.299 [2024-12-15T14:56:19.869Z] =================================================================================================================== 00:08:51.299 [2024-12-15T14:56:19.869Z] Total : 35501.40 138.68 0.00 0.00 0.00 0.00 0.00 00:08:51.299 00:08:52.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.237 Nvme0n1 : 6.00 35435.33 138.42 0.00 0.00 0.00 0.00 0.00 00:08:52.237 [2024-12-15T14:56:20.807Z] =================================================================================================================== 00:08:52.237 [2024-12-15T14:56:20.807Z] Total : 35435.33 138.42 0.00 0.00 0.00 0.00 0.00 00:08:52.237 00:08:53.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.175 Nvme0n1 : 7.00 35497.43 138.66 0.00 0.00 0.00 0.00 0.00 00:08:53.175 [2024-12-15T14:56:21.745Z] =================================================================================================================== 00:08:53.175 [2024-12-15T14:56:21.745Z] Total : 35497.43 138.66 0.00 0.00 0.00 0.00 0.00 00:08:53.175 00:08:54.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.554 Nvme0n1 : 8.00 35552.12 138.88 0.00 0.00 0.00 0.00 0.00 00:08:54.554 [2024-12-15T14:56:23.124Z] =================================================================================================================== 00:08:54.554 [2024-12-15T14:56:23.124Z] Total : 35552.12 138.88 0.00 0.00 0.00 0.00 0.00 00:08:54.554 00:08:55.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.492 Nvme0n1 : 9.00 35584.11 139.00 0.00 0.00 0.00 0.00 0.00 00:08:55.492 [2024-12-15T14:56:24.062Z] =================================================================================================================== 00:08:55.492 [2024-12-15T14:56:24.062Z] Total : 35584.11 139.00 0.00 0.00 0.00 0.00 0.00 00:08:55.492 00:08:56.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.428 Nvme0n1 : 10.00 35612.90 139.11 0.00 0.00 0.00 0.00 0.00 00:08:56.428 [2024-12-15T14:56:24.998Z] =================================================================================================================== 00:08:56.428 [2024-12-15T14:56:24.998Z] Total : 35612.90 139.11 0.00 0.00 0.00 0.00 0.00 00:08:56.428 00:08:56.428 00:08:56.428 Latency(us) 00:08:56.428 [2024-12-15T14:56:24.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.428 Nvme0n1 : 10.00 35613.40 139.11 0.00 0.00 3591.38 2267.55 9437.18 00:08:56.428 [2024-12-15T14:56:24.998Z] =================================================================================================================== 00:08:56.428 [2024-12-15T14:56:24.998Z] Total : 35613.40 139.11 0.00 0.00 3591.38 2267.55 9437.18 00:08:56.428 { 00:08:56.428 "results": [ 00:08:56.428 { 00:08:56.428 "job": "Nvme0n1", 00:08:56.428 "core_mask": "0x2", 00:08:56.428 "workload": "randwrite", 00:08:56.428 "status": "finished", 00:08:56.428 "queue_depth": 128, 00:08:56.428 "io_size": 4096, 
00:08:56.428 "runtime": 10.003398, 00:08:56.428 "iops": 35613.39856716688, 00:08:56.428 "mibps": 139.11483815299562, 00:08:56.428 "io_failed": 0, 00:08:56.428 "io_timeout": 0, 00:08:56.428 "avg_latency_us": 3591.3825129887296, 00:08:56.428 "min_latency_us": 2267.5456, 00:08:56.428 "max_latency_us": 9437.184 00:08:56.428 } 00:08:56.428 ], 00:08:56.428 "core_count": 1 00:08:56.428 } 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2679391 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2679391 ']' 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2679391 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2679391 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2679391' 00:08:56.428 killing process with pid 2679391 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2679391 00:08:56.428 Received shutdown signal, test time was about 10.000000 seconds 00:08:56.428 00:08:56.428 Latency(us) 00:08:56.428 [2024-12-15T14:56:24.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.428 [2024-12-15T14:56:24.998Z] =================================================================================================================== 00:08:56.428 [2024-12-15T14:56:24.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2679391 00:08:56.428 15:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:56.687 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.946 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:56.946 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:57.204 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:57.204 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:57.205 15:56:25 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2676234 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2676234 00:08:57.205 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2676234 Killed "${NVMF_APP[@]}" "$@" 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=2681453 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 2681453 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2681453 ']' 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.205 15:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.205 [2024-12-15 15:56:25.667657] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:57.205 [2024-12-15 15:56:25.667723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.205 [2024-12-15 15:56:25.739976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.464 [2024-12-15 15:56:25.777411] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.464 [2024-12-15 15:56:25.777449] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.464 [2024-12-15 15:56:25.777458] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.464 [2024-12-15 15:56:25.777466] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
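(Editor's note) What follows is the dirty-shutdown half of the test: the previous nvmf target was killed with SIGKILL while the grown lvstore was still open (the "line 75: 2676234 Killed" message above), and the freshly started target re-attaches the same backing file, which forces blobstore recovery before the cluster counts are re-checked. A minimal sketch of that flow, reusing the $rpc/$lvs placeholders from the earlier sketch and with $nvmfpid standing in for the killed target's pid:

    kill -9 "$nvmfpid"                            # hard-kill; lvstore left dirty
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # restart the target (flags as in this run)
    $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096   # re-attach; triggers
                                                       # "Performing recovery on blobstore"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99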
00:08:57.464 [2024-12-15 15:56:25.777489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.464 [2024-12-15 15:56:25.777514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.032 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.032 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:58.032 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:58.032 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:58.032 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.032 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.032 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.292 [2024-12-15 15:56:26.711079] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:58.292 [2024-12-15 15:56:26.711200] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:58.292 [2024-12-15 15:56:26.711230] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:58.292 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:58.292 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7b0a0d66-2541-45eb-b274-49f0dc8c6e10 00:08:58.292 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=7b0a0d66-2541-45eb-b274-49f0dc8c6e10 00:08:58.292 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.292 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:58.292 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.292 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.292 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:58.551 15:56:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7b0a0d66-2541-45eb-b274-49f0dc8c6e10 -t 2000 00:08:58.551 [ 00:08:58.551 { 00:08:58.551 "name": "7b0a0d66-2541-45eb-b274-49f0dc8c6e10", 00:08:58.551 "aliases": [ 00:08:58.551 "lvs/lvol" 00:08:58.551 ], 00:08:58.551 "product_name": "Logical Volume", 00:08:58.551 "block_size": 4096, 00:08:58.551 "num_blocks": 38912, 00:08:58.551 "uuid": "7b0a0d66-2541-45eb-b274-49f0dc8c6e10", 00:08:58.551 "assigned_rate_limits": { 00:08:58.551 "rw_ios_per_sec": 0, 00:08:58.551 "rw_mbytes_per_sec": 0, 
00:08:58.551 "r_mbytes_per_sec": 0, 00:08:58.551 "w_mbytes_per_sec": 0 00:08:58.551 }, 00:08:58.551 "claimed": false, 00:08:58.551 "zoned": false, 00:08:58.551 "supported_io_types": { 00:08:58.551 "read": true, 00:08:58.551 "write": true, 00:08:58.551 "unmap": true, 00:08:58.551 "flush": false, 00:08:58.551 "reset": true, 00:08:58.551 "nvme_admin": false, 00:08:58.551 "nvme_io": false, 00:08:58.551 "nvme_io_md": false, 00:08:58.551 "write_zeroes": true, 00:08:58.551 "zcopy": false, 00:08:58.551 "get_zone_info": false, 00:08:58.551 "zone_management": false, 00:08:58.551 "zone_append": false, 00:08:58.551 "compare": false, 00:08:58.551 "compare_and_write": false, 00:08:58.551 "abort": false, 00:08:58.551 "seek_hole": true, 00:08:58.551 "seek_data": true, 00:08:58.551 "copy": false, 00:08:58.551 "nvme_iov_md": false 00:08:58.551 }, 00:08:58.551 "driver_specific": { 00:08:58.551 "lvol": { 00:08:58.551 "lvol_store_uuid": "c8205f9e-7d88-480f-ab64-71a41617fc21", 00:08:58.551 "base_bdev": "aio_bdev", 00:08:58.551 "thin_provision": false, 00:08:58.551 "num_allocated_clusters": 38, 00:08:58.551 "snapshot": false, 00:08:58.551 "clone": false, 00:08:58.551 "esnap_clone": false 00:08:58.551 } 00:08:58.551 } 00:08:58.551 } 00:08:58.551 ] 00:08:58.551 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:58.551 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:58.551 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:58.810 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:58.810 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:58.810 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:59.070 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:59.070 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.070 [2024-12-15 15:56:27.631682] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:08:59.329 request: 00:08:59.329 { 00:08:59.329 "uuid": "c8205f9e-7d88-480f-ab64-71a41617fc21", 00:08:59.329 "method": "bdev_lvol_get_lvstores", 00:08:59.329 "req_id": 1 00:08:59.329 } 00:08:59.329 Got JSON-RPC error response 00:08:59.329 response: 00:08:59.329 { 00:08:59.329 "code": -19, 00:08:59.329 "message": "No such device" 00:08:59.329 } 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.329 15:56:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.588 aio_bdev 00:08:59.588 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7b0a0d66-2541-45eb-b274-49f0dc8c6e10 00:08:59.588 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=7b0a0d66-2541-45eb-b274-49f0dc8c6e10 00:08:59.588 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.588 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:59.588 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.588 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.588 15:56:28 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.848 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7b0a0d66-2541-45eb-b274-49f0dc8c6e10 -t 2000 00:08:59.848 [ 00:08:59.848 { 00:08:59.848 "name": "7b0a0d66-2541-45eb-b274-49f0dc8c6e10", 00:08:59.848 "aliases": [ 00:08:59.848 "lvs/lvol" 00:08:59.848 ], 00:08:59.848 "product_name": "Logical Volume", 00:08:59.848 "block_size": 4096, 00:08:59.848 "num_blocks": 38912, 00:08:59.848 "uuid": "7b0a0d66-2541-45eb-b274-49f0dc8c6e10", 00:08:59.848 "assigned_rate_limits": { 00:08:59.848 "rw_ios_per_sec": 0, 00:08:59.848 "rw_mbytes_per_sec": 0, 00:08:59.848 "r_mbytes_per_sec": 0, 00:08:59.848 "w_mbytes_per_sec": 0 00:08:59.848 }, 00:08:59.848 "claimed": false, 00:08:59.848 "zoned": false, 00:08:59.848 "supported_io_types": { 00:08:59.848 "read": true, 00:08:59.848 "write": true, 00:08:59.848 "unmap": true, 00:08:59.848 "flush": false, 00:08:59.848 "reset": true, 00:08:59.848 "nvme_admin": false, 00:08:59.848 "nvme_io": false, 00:08:59.848 "nvme_io_md": false, 00:08:59.848 "write_zeroes": true, 00:08:59.848 "zcopy": false, 00:08:59.848 "get_zone_info": false, 00:08:59.848 "zone_management": false, 00:08:59.848 "zone_append": false, 00:08:59.848 "compare": false, 00:08:59.848 "compare_and_write": false, 00:08:59.848 "abort": false, 00:08:59.848 "seek_hole": true, 00:08:59.848 "seek_data": true, 00:08:59.848 "copy": false, 00:08:59.848 "nvme_iov_md": false 00:08:59.848 }, 00:08:59.848 "driver_specific": { 00:08:59.848 "lvol": { 00:08:59.848 "lvol_store_uuid": "c8205f9e-7d88-480f-ab64-71a41617fc21", 00:08:59.848 "base_bdev": "aio_bdev", 00:08:59.848 "thin_provision": false, 00:08:59.848 "num_allocated_clusters": 38, 00:08:59.848 "snapshot": false, 00:08:59.848 "clone": false, 00:08:59.848 "esnap_clone": false 00:08:59.848 } 00:08:59.848 } 00:08:59.848 } 00:08:59.848 ] 00:09:00.107 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:00.107 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:09:00.107 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:00.107 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:00.107 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:09:00.107 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:00.366 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:00.366 15:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b0a0d66-2541-45eb-b274-49f0dc8c6e10 00:09:00.626 15:56:28 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c8205f9e-7d88-480f-ab64-71a41617fc21 00:09:00.626 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:00.885 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:00.885 00:09:00.885 real 0m17.458s 00:09:00.885 user 0m44.234s 00:09:00.885 sys 0m3.264s 00:09:00.885 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.885 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:00.885 ************************************ 00:09:00.885 END TEST lvs_grow_dirty 00:09:00.885 ************************************ 00:09:01.144 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:01.144 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:01.144 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:01.144 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:01.144 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:01.144 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:01.145 nvmf_trace.0 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:01.145 rmmod nvme_rdma 00:09:01.145 rmmod nvme_fabrics 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:01.145 
15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 2681453 ']' 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 2681453 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2681453 ']' 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2681453 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2681453 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2681453' 00:09:01.145 killing process with pid 2681453 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2681453 00:09:01.145 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2681453 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:01.405 00:09:01.405 real 0m40.991s 00:09:01.405 user 1m6.084s 00:09:01.405 sys 0m9.985s 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.405 ************************************ 00:09:01.405 END TEST nvmf_lvs_grow 00:09:01.405 ************************************ 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.405 ************************************ 00:09:01.405 START TEST nvmf_bdev_io_wait 00:09:01.405 ************************************ 00:09:01.405 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:01.665 * Looking for test storage... 
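The lvs_grow_dirty teardown above unwinds the storage stack in strict reverse order: the lvol first, then the lvstore, then the aio bdev, then the scratch file behind it. A minimal sketch of the same sequence as standalone RPCs, using the names and UUIDs from the trace and rpc.py shortened to its repo-relative path:

    # tear down from the top of the stack; each step frees the layer below it
    scripts/rpc.py bdev_lvol_delete 7b0a0d66-2541-45eb-b274-49f0dc8c6e10
    scripts/rpc.py bdev_lvol_delete_lvstore -u c8205f9e-7d88-480f-ab64-71a41617fc21
    scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f test/nvmf/target/aio_bdev    # scratch file that backed the aio bdev

Deleting in the opposite order would fail, since the lvstore lives on the aio bdev and the lvol's metadata lives in the lvstore.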
00:09:01.665 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:01.665 15:56:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.665 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:01.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.666 --rc genhtml_branch_coverage=1 00:09:01.666 --rc genhtml_function_coverage=1 00:09:01.666 --rc genhtml_legend=1 00:09:01.666 --rc geninfo_all_blocks=1 00:09:01.666 --rc geninfo_unexecuted_blocks=1 00:09:01.666 00:09:01.666 ' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:01.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.666 --rc genhtml_branch_coverage=1 00:09:01.666 --rc genhtml_function_coverage=1 00:09:01.666 --rc genhtml_legend=1 00:09:01.666 --rc geninfo_all_blocks=1 00:09:01.666 --rc geninfo_unexecuted_blocks=1 00:09:01.666 00:09:01.666 ' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:01.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.666 --rc genhtml_branch_coverage=1 00:09:01.666 --rc genhtml_function_coverage=1 00:09:01.666 --rc genhtml_legend=1 00:09:01.666 --rc geninfo_all_blocks=1 00:09:01.666 --rc geninfo_unexecuted_blocks=1 00:09:01.666 00:09:01.666 ' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:01.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.666 --rc genhtml_branch_coverage=1 00:09:01.666 --rc genhtml_function_coverage=1 00:09:01.666 --rc genhtml_legend=1 00:09:01.666 --rc geninfo_all_blocks=1 00:09:01.666 --rc geninfo_unexecuted_blocks=1 00:09:01.666 00:09:01.666 ' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.666 15:56:30 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.666 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.666 15:56:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.240 15:56:36 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:08.240 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:08.241 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:08.241 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:08.241 15:56:36 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:08.241 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:08.241 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # rdma_device_init 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@62 -- # uname 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:08.241 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:08.241 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:08.241 altname enp217s0f0np0 00:09:08.241 altname ens818f0np0 00:09:08.241 inet 192.168.100.8/24 scope global mlx_0_0 00:09:08.241 valid_lft forever preferred_lft forever 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:08.241 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:08.241 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:08.241 altname enp217s0f1np1 00:09:08.241 altname ens818f1np1 00:09:08.241 inet 192.168.100.9/24 scope global mlx_0_1 00:09:08.241 valid_lft forever preferred_lft forever 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:08.241 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
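The allocate_nic_ips helper walks the RDMA-capable netdevs that get_rdma_if_list returns and hands each one an address from the 192.168.100.0/24 test subnet, counting up from NVMF_IP_LEAST_ADDR=8; the ip addr show output above confirms mlx_0_0 ended up with .8 and mlx_0_1 with .9. A rough sketch of that loop, with the caveat that the real helper also copes with already-assigned addresses, which this version skips:

    count=8                            # NVMF_IP_LEAST_ADDR from common.sh
    for ifc in mlx_0_0 mlx_0_1; do     # the get_rdma_if_list output seen in the log
        ip addr add 192.168.100.$count/24 dev "$ifc"
        count=$((count + 1))
    done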
00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:08.501 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:08.502 192.168.100.9' 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:08.502 192.168.100.9' 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # head -n 1 00:09:08.502 15:56:36 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:08.502 192.168.100.9' 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # tail -n +2 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # head -n 1 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=2685516 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 2685516 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2685516 ']' 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.502 15:56:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.502 [2024-12-15 15:56:36.970365] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
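nvmfappstart reduces to two steps: launch nvmf_tgt in the background with the flags shown (-i 0 -e 0xFFFF -m 0xF --wait-for-rpc) and block in waitforlisten until the RPC socket answers. A hedged sketch of that start-and-wait pattern; the polling loop below is an assumption, as the harness's waitforlisten carries more retry and error handling (max_retries=100 in the trace):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # poll the default RPC socket until the target answers
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done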
00:09:08.502 [2024-12-15 15:56:36.970419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.502 [2024-12-15 15:56:37.039831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.762 [2024-12-15 15:56:37.080873] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.762 [2024-12-15 15:56:37.080912] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.762 [2024-12-15 15:56:37.080921] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.762 [2024-12-15 15:56:37.080930] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.762 [2024-12-15 15:56:37.080952] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.762 [2024-12-15 15:56:37.081000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.762 [2024-12-15 15:56:37.081095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.762 [2024-12-15 15:56:37.081181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.762 [2024-12-15 15:56:37.081183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:08.762 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.762 15:56:37 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.762 [2024-12-15 15:56:37.276124] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf5fd90/0xf64280) succeed. 00:09:08.762 [2024-12-15 15:56:37.286855] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf613d0/0xfa5920) succeed. 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.022 Malloc0 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.022 [2024-12-15 15:56:37.465770] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2685765 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2685767 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 
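Stripped of the xtrace noise, the target-side setup traced above is a short RPC sequence; because the app was started with --wait-for-rpc, the bdev options must be set and framework_start_init issued before anything else. The tiny bdev_io pool and cache (-p 5 -c 1) are the point of this test, starving I/O allocation so the bdev_io_wait path gets exercised. Every value below is lifted straight from the trace:

    scripts/rpc.py bdev_set_options -p 5 -c 1                 # tiny bdev_io pool/cache to force waits
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420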
00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:09.022 { 00:09:09.022 "params": { 00:09:09.022 "name": "Nvme$subsystem", 00:09:09.022 "trtype": "$TEST_TRANSPORT", 00:09:09.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.022 "adrfam": "ipv4", 00:09:09.022 "trsvcid": "$NVMF_PORT", 00:09:09.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.022 "hdgst": ${hdgst:-false}, 00:09:09.022 "ddgst": ${ddgst:-false} 00:09:09.022 }, 00:09:09.022 "method": "bdev_nvme_attach_controller" 00:09:09.022 } 00:09:09.022 EOF 00:09:09.022 )") 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2685769 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:09.022 { 00:09:09.022 "params": { 00:09:09.022 "name": "Nvme$subsystem", 00:09:09.022 "trtype": "$TEST_TRANSPORT", 00:09:09.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.022 "adrfam": "ipv4", 00:09:09.022 "trsvcid": "$NVMF_PORT", 00:09:09.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.022 "hdgst": ${hdgst:-false}, 00:09:09.022 "ddgst": ${ddgst:-false} 00:09:09.022 }, 00:09:09.022 "method": "bdev_nvme_attach_controller" 00:09:09.022 } 00:09:09.022 EOF 00:09:09.022 )") 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2685772 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:09.022 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:09.022 { 00:09:09.022 "params": { 00:09:09.022 "name": "Nvme$subsystem", 00:09:09.022 "trtype": "$TEST_TRANSPORT", 
00:09:09.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.022 "adrfam": "ipv4", 00:09:09.022 "trsvcid": "$NVMF_PORT", 00:09:09.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.023 "hdgst": ${hdgst:-false}, 00:09:09.023 "ddgst": ${ddgst:-false} 00:09:09.023 }, 00:09:09.023 "method": "bdev_nvme_attach_controller" 00:09:09.023 } 00:09:09.023 EOF 00:09:09.023 )") 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:09.023 { 00:09:09.023 "params": { 00:09:09.023 "name": "Nvme$subsystem", 00:09:09.023 "trtype": "$TEST_TRANSPORT", 00:09:09.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.023 "adrfam": "ipv4", 00:09:09.023 "trsvcid": "$NVMF_PORT", 00:09:09.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.023 "hdgst": ${hdgst:-false}, 00:09:09.023 "ddgst": ${ddgst:-false} 00:09:09.023 }, 00:09:09.023 "method": "bdev_nvme_attach_controller" 00:09:09.023 } 00:09:09.023 EOF 00:09:09.023 )") 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2685765 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:09.023 "params": { 00:09:09.023 "name": "Nvme1", 00:09:09.023 "trtype": "rdma", 00:09:09.023 "traddr": "192.168.100.8", 00:09:09.023 "adrfam": "ipv4", 00:09:09.023 "trsvcid": "4420", 00:09:09.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.023 "hdgst": false, 00:09:09.023 "ddgst": false 00:09:09.023 }, 00:09:09.023 "method": "bdev_nvme_attach_controller" 00:09:09.023 }' 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:09.023 "params": { 00:09:09.023 "name": "Nvme1", 00:09:09.023 "trtype": "rdma", 00:09:09.023 "traddr": "192.168.100.8", 00:09:09.023 "adrfam": "ipv4", 00:09:09.023 "trsvcid": "4420", 00:09:09.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.023 "hdgst": false, 00:09:09.023 "ddgst": false 00:09:09.023 }, 00:09:09.023 "method": "bdev_nvme_attach_controller" 00:09:09.023 }' 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:09.023 "params": { 00:09:09.023 "name": "Nvme1", 00:09:09.023 "trtype": "rdma", 00:09:09.023 "traddr": "192.168.100.8", 00:09:09.023 "adrfam": "ipv4", 00:09:09.023 "trsvcid": "4420", 00:09:09.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.023 "hdgst": false, 00:09:09.023 "ddgst": false 00:09:09.023 }, 00:09:09.023 "method": "bdev_nvme_attach_controller" 00:09:09.023 }' 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:09.023 15:56:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:09.023 "params": { 00:09:09.023 "name": "Nvme1", 00:09:09.023 "trtype": "rdma", 00:09:09.023 "traddr": "192.168.100.8", 00:09:09.023 "adrfam": "ipv4", 00:09:09.023 "trsvcid": "4420", 00:09:09.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.023 "hdgst": false, 00:09:09.023 "ddgst": false 00:09:09.023 }, 00:09:09.023 "method": "bdev_nvme_attach_controller" 00:09:09.023 }' 00:09:09.023 [2024-12-15 15:56:37.517364] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:09.023 [2024-12-15 15:56:37.517417] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:09.023 [2024-12-15 15:56:37.519769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:09.023 [2024-12-15 15:56:37.519814] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:09.023 [2024-12-15 15:56:37.520810] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
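Each of the four bdevperf workers receives its config over an anonymous pipe (--json /dev/fd/63): gen_nvmf_target_json prints one bdev_nvme_attach_controller block per subsystem, as shown above, and jq folds the blocks into a bdev-subsystem document. A sketch of an equivalent standalone invocation for the write worker; the outer "subsystems" wrapper is assumed from how the harness assembles the printed params, so treat it as illustrative rather than the literal jq output:

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # flags match the write instance above; swap -w for read, flush, or unmap
    build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256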
00:09:09.023 [2024-12-15 15:56:37.520854] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:09.023 [2024-12-15 15:56:37.523027] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:09.023 [2024-12-15 15:56:37.523075] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:09.282 [2024-12-15 15:56:37.701188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.282 [2024-12-15 15:56:37.726899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:09.282 [2024-12-15 15:56:37.789164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.282 [2024-12-15 15:56:37.814879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:09.541 [2024-12-15 15:56:37.889807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.541 [2024-12-15 15:56:37.921633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:09.541 [2024-12-15 15:56:37.936004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.541 [2024-12-15 15:56:37.961345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:09.800 Running I/O for 1 seconds... 00:09:09.800 Running I/O for 1 seconds... 00:09:09.800 Running I/O for 1 seconds... 00:09:10.059 Running I/O for 1 seconds... 00:09:10.997 18242.00 IOPS, 71.26 MiB/s 00:09:10.997 Latency(us) 00:09:10.997 [2024-12-15T14:56:39.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.997 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:10.997 Nvme1n1 : 1.01 18275.77 71.39 0.00 0.00 6982.02 4141.88 13369.34 00:09:10.997 [2024-12-15T14:56:39.567Z] =================================================================================================================== 00:09:10.997 [2024-12-15T14:56:39.567Z] Total : 18275.77 71.39 0.00 0.00 6982.02 4141.88 13369.34 00:09:10.997 14237.00 IOPS, 55.61 MiB/s 00:09:10.997 Latency(us) 00:09:10.997 [2024-12-15T14:56:39.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.997 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:10.997 Nvme1n1 : 1.01 14293.32 55.83 0.00 0.00 8926.68 4718.59 18559.80 00:09:10.997 [2024-12-15T14:56:39.567Z] =================================================================================================================== 00:09:10.997 [2024-12-15T14:56:39.567Z] Total : 14293.32 55.83 0.00 0.00 8926.68 4718.59 18559.80 00:09:10.997 19029.00 IOPS, 74.33 MiB/s 00:09:10.997 Latency(us) 00:09:10.997 [2024-12-15T14:56:39.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.997 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:10.997 Nvme1n1 : 1.01 19118.06 74.68 0.00 0.00 6680.64 2647.65 16986.93 00:09:10.997 [2024-12-15T14:56:39.567Z] =================================================================================================================== 00:09:10.997 [2024-12-15T14:56:39.567Z] Total : 19118.06 74.68 0.00 0.00 6680.64 2647.65 16986.93 00:09:10.997 263720.00 IOPS, 1030.16 MiB/s 00:09:10.997 
Latency(us) 00:09:10.997 [2024-12-15T14:56:39.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.997 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:10.997 Nvme1n1 : 1.00 263327.74 1028.62 0.00 0.00 483.36 224.46 1966.08 00:09:10.997 [2024-12-15T14:56:39.567Z] =================================================================================================================== 00:09:10.997 [2024-12-15T14:56:39.567Z] Total : 263327.74 1028.62 0.00 0.00 483.36 224.46 1966.08 00:09:10.997 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2685767 00:09:10.997 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2685769 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2685772 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:11.257 rmmod nvme_rdma 00:09:11.257 rmmod nvme_fabrics 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 2685516 ']' 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 2685516 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2685516 ']' 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2685516 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.257 15:56:39 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2685516 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2685516' 00:09:11.257 killing process with pid 2685516 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2685516 00:09:11.257 15:56:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2685516 00:09:11.516 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:11.516 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:11.516 00:09:11.516 real 0m10.148s 00:09:11.516 user 0m19.000s 00:09:11.516 sys 0m6.666s 00:09:11.516 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.516 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 ************************************ 00:09:11.516 END TEST nvmf_bdev_io_wait 00:09:11.516 ************************************ 00:09:11.517 15:56:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:11.517 15:56:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.517 15:56:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.517 15:56:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.776 ************************************ 00:09:11.776 START TEST nvmf_queue_depth 00:09:11.776 ************************************ 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:11.776 * Looking for test storage... 
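Two pieces of shared plumbing surface at this boundary: the run_test wrapper that prints the START/END TEST banners and the real/user/sys timing, and the lcov version gate that picks coverage options. Condensed sketches of both, assuming the logic visible in the trace rather than the exact autotest_common.sh and scripts/common.sh bodies:

run_test() {
    local test_name=$1; shift
    echo "START TEST $test_name"
    time "$@"            # the real/user/sys lines in this log come from time
    local rc=$?
    echo "END TEST $test_name"
    return $rc
}

lt() {
    # Succeeds when $1 < $2 field-by-field, so "lt 1.15 2" is true here and
    # the pre-2.0 lcov option names (lcov_branch_coverage=1, ...) get used.
    local IFS=.-: i v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1
}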
00:09:11.776 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:11.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.776 --rc genhtml_branch_coverage=1 00:09:11.776 --rc genhtml_function_coverage=1 00:09:11.776 --rc genhtml_legend=1 00:09:11.776 --rc geninfo_all_blocks=1 00:09:11.776 --rc geninfo_unexecuted_blocks=1 00:09:11.776 00:09:11.776 ' 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:11.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.776 --rc genhtml_branch_coverage=1 00:09:11.776 --rc genhtml_function_coverage=1 00:09:11.776 --rc genhtml_legend=1 00:09:11.776 --rc geninfo_all_blocks=1 00:09:11.776 --rc geninfo_unexecuted_blocks=1 00:09:11.776 00:09:11.776 ' 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:11.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.776 --rc genhtml_branch_coverage=1 00:09:11.776 --rc genhtml_function_coverage=1 00:09:11.776 --rc genhtml_legend=1 00:09:11.776 --rc geninfo_all_blocks=1 00:09:11.776 --rc geninfo_unexecuted_blocks=1 00:09:11.776 00:09:11.776 ' 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:11.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.776 --rc genhtml_branch_coverage=1 00:09:11.776 --rc genhtml_function_coverage=1 00:09:11.776 --rc genhtml_legend=1 00:09:11.776 --rc geninfo_all_blocks=1 00:09:11.776 --rc geninfo_unexecuted_blocks=1 00:09:11.776 00:09:11.776 ' 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.776 15:56:40 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.776 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.777 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.777 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.777 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:11.777 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.036 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.036 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:12.037 15:56:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:18.611 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:18.611 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:18.611 
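The discovery pass above is just vendor:device matching; 0x15b3:0x1015 (Mellanox ConnectX-4 Lx) turns up on both ports of the d9:00 adapter. The same check done directly against sysfs, as a standalone sketch independent of the script's cached arrays:

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    # Keep only the RDMA-capable NICs the script's mlx table lists.
    if [[ $vendor == 0x15b3 && $device == 0x1015 ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
    fi
done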
15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:18.611 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:18.611 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # rdma_device_init 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # 
uname 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:18.611 15:56:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:18.611 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:18.611 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 
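rdma_device_init has to bring up the whole kernel RDMA stack before any interface can be addressed. The modprobe sequence from the trace, runnable on its own (modprobe resolves dependencies itself, so the explicit order mostly documents intent):

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done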
00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:18.612 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.612 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:18.612 altname enp217s0f0np0 00:09:18.612 altname ens818f0np0 00:09:18.612 inet 192.168.100.8/24 scope global mlx_0_0 00:09:18.612 valid_lft forever preferred_lft forever 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:18.612 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.612 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:18.612 altname enp217s0f1np1 00:09:18.612 altname ens818f1np1 00:09:18.612 inet 192.168.100.9/24 scope global mlx_0_1 00:09:18.612 valid_lft forever preferred_lft forever 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:18.612 192.168.100.9' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:18.612 192.168.100.9' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # head -n 1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 
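Per-interface address extraction above is a fixed three-stage pipeline. Folded into a helper it reads as follows; the commands are verbatim from the trace, only the wrapping is a sketch:

get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this rig
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9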
00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:18.612 192.168.100.9' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # tail -n +2 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # head -n 1 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:18.612 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=2689498 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 2689498 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2689498 ']' 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.873 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.873 [2024-12-15 15:56:47.258541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:18.873 [2024-12-15 15:56:47.258596] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.873 [2024-12-15 15:56:47.334542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.873 [2024-12-15 15:56:47.372826] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
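nvmfappstart pins the target to core 1 (-m 0x2) and then blocks in waitforlisten until the RPC socket answers. Reduced to its essentials; the 0.5 s poll interval is an assumption, not the script's exact backoff:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the RPC socket until the target is ready to take commands.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done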
00:09:18.873 [2024-12-15 15:56:47.372865] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.873 [2024-12-15 15:56:47.372874] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.873 [2024-12-15 15:56:47.372885] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.873 [2024-12-15 15:56:47.372892] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.873 [2024-12-15 15:56:47.372913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.132 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.133 [2024-12-15 15:56:47.534389] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14e7ed0/0x14ec3c0) succeed. 00:09:19.133 [2024-12-15 15:56:47.544618] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14e93d0/0x152da60) succeed. 
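Both mlx5 ports register as IB devices the moment the transport exists. The RPC that triggered it, as a standalone command (rpc.py talks to /var/tmp/spdk.sock by default):

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192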
00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.133 Malloc0 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.133 [2024-12-15 15:56:47.632617] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2689533 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2689533 /var/tmp/bdevperf.sock 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2689533 ']' 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.133 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.133 [2024-12-15 15:56:47.682529] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:19.133 [2024-12-15 15:56:47.682590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689533 ] 00:09:19.392 [2024-12-15 15:56:47.752854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.392 [2024-12-15 15:56:47.792211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.392 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.392 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:19.392 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:19.392 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.392 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.652 NVMe0n1 00:09:19.652 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.652 15:56:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:19.652 Running I/O for 10 seconds... 
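Everything the ten-second run below depends on was provisioned by the handful of RPCs traced above. The same sequence as plain rpc.py calls, with names and addresses taken from the trace:

# Target side (default socket /var/tmp/spdk.sock): backing bdev, subsystem,
# namespace, listener.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
# Initiator side, against bdevperf's private socket, then start the workload.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests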
00:09:21.527 17408.00 IOPS, 68.00 MiB/s [2024-12-15T14:56:51.535Z] 17883.00 IOPS, 69.86 MiB/s [2024-12-15T14:56:52.127Z] 17795.00 IOPS, 69.51 MiB/s [2024-12-15T14:56:53.506Z] 17920.00 IOPS, 70.00 MiB/s [2024-12-15T14:56:54.444Z] 17997.40 IOPS, 70.30 MiB/s [2024-12-15T14:56:55.381Z] 18015.17 IOPS, 70.37 MiB/s [2024-12-15T14:56:56.318Z] 18016.71 IOPS, 70.38 MiB/s [2024-12-15T14:56:57.255Z] 18048.00 IOPS, 70.50 MiB/s [2024-12-15T14:56:58.193Z] 18090.67 IOPS, 70.67 MiB/s [2024-12-15T14:56:58.193Z] 18115.60 IOPS, 70.76 MiB/s 00:09:29.623 Latency(us) 00:09:29.623 [2024-12-15T14:56:58.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.623 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:29.623 Verification LBA range: start 0x0 length 0x4000 00:09:29.623 NVMe0n1 : 10.04 18124.17 70.80 0.00 0.00 56337.39 16252.93 36700.16 00:09:29.623 [2024-12-15T14:56:58.193Z] =================================================================================================================== 00:09:29.623 [2024-12-15T14:56:58.193Z] Total : 18124.17 70.80 0.00 0.00 56337.39 16252.93 36700.16 00:09:29.623 { 00:09:29.623 "results": [ 00:09:29.623 { 00:09:29.623 "job": "NVMe0n1", 00:09:29.623 "core_mask": "0x1", 00:09:29.623 "workload": "verify", 00:09:29.623 "status": "finished", 00:09:29.623 "verify_range": { 00:09:29.623 "start": 0, 00:09:29.623 "length": 16384 00:09:29.623 }, 00:09:29.623 "queue_depth": 1024, 00:09:29.623 "io_size": 4096, 00:09:29.623 "runtime": 10.041176, 00:09:29.623 "iops": 18124.171909744437, 00:09:29.623 "mibps": 70.7975465224392, 00:09:29.623 "io_failed": 0, 00:09:29.623 "io_timeout": 0, 00:09:29.623 "avg_latency_us": 56337.388819552936, 00:09:29.623 "min_latency_us": 16252.928, 00:09:29.623 "max_latency_us": 36700.16 00:09:29.623 } 00:09:29.623 ], 00:09:29.623 "core_count": 1 00:09:29.623 } 00:09:29.623 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2689533 00:09:29.623 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2689533 ']' 00:09:29.623 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2689533 00:09:29.623 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:29.623 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.623 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2689533 00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2689533' 00:09:29.882 killing process with pid 2689533 00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2689533 00:09:29.882 Received shutdown signal, test time was about 10.000000 seconds 00:09:29.882 00:09:29.882 Latency(us) 00:09:29.882 [2024-12-15T14:56:58.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.882 [2024-12-15T14:56:58.452Z] 
00:09:29.882 [2024-12-15T14:56:58.452Z] ===================================================================================================================
00:09:29.882 [2024-12-15T14:56:58.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2689533
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:29.882 rmmod nvme_rdma
00:09:29.882 rmmod nvme_fabrics
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 2689498 ']'
00:09:29.882 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 2689498
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2689498 ']'
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2689498
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2689498
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2689498'
00:09:30.142 killing process with pid 2689498
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2689498
00:09:30.142 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2689498
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:09:30.402
00:09:30.402 real 0m18.628s
00:09:30.402 user 0m24.223s
00:09:30.402 sys 0m5.976s
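For anyone post-processing these runs: bdevperf reports the result twice, once as the human-readable table and once as the JSON object above, whose iops, mibps and *_latency_us fields mirror the Total row. A small sketch for pulling the headline numbers out of a captured run, assuming jq is installed and the JSON object has been saved to result.json (both are assumptions, not part of this harness):

    # Print the headline figures from the bdevperf JSON summary, e.g. "18124.17... IOPS, 70.79... MiB/s, avg 56337.38... us".
    jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' result.json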
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:30.402 ************************************
00:09:30.402 END TEST nvmf_queue_depth
00:09:30.402 ************************************
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:30.402 ************************************
00:09:30.402 START TEST nvmf_target_multipath
00:09:30.402 ************************************
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma
00:09:30.402 * Looking for test storage...
00:09:30.402 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version
00:09:30.402 15:56:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 --
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.662 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:30.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.663 --rc genhtml_branch_coverage=1 00:09:30.663 --rc genhtml_function_coverage=1 00:09:30.663 --rc genhtml_legend=1 00:09:30.663 --rc geninfo_all_blocks=1 00:09:30.663 --rc geninfo_unexecuted_blocks=1 00:09:30.663 00:09:30.663 ' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:30.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.663 --rc genhtml_branch_coverage=1 00:09:30.663 --rc genhtml_function_coverage=1 00:09:30.663 --rc genhtml_legend=1 00:09:30.663 --rc geninfo_all_blocks=1 00:09:30.663 --rc geninfo_unexecuted_blocks=1 00:09:30.663 00:09:30.663 ' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:30.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.663 --rc genhtml_branch_coverage=1 00:09:30.663 --rc genhtml_function_coverage=1 00:09:30.663 --rc genhtml_legend=1 00:09:30.663 --rc geninfo_all_blocks=1 00:09:30.663 --rc geninfo_unexecuted_blocks=1 00:09:30.663 00:09:30.663 ' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:30.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.663 --rc genhtml_branch_coverage=1 00:09:30.663 --rc genhtml_function_coverage=1 00:09:30.663 --rc genhtml_legend=1 00:09:30.663 --rc geninfo_all_blocks=1 00:09:30.663 --rc geninfo_unexecuted_blocks=1 00:09:30.663 00:09:30.663 ' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.663 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.663 15:56:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.233 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:37.234 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:37.234 15:57:05 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:37.234 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:37.234 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:37.234 15:57:05 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:37.234 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # rdma_device_init 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:37.234 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:37.234 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:37.234 altname enp217s0f0np0 00:09:37.234 altname ens818f0np0 00:09:37.234 inet 192.168.100.8/24 scope global mlx_0_0 00:09:37.234 valid_lft forever preferred_lft forever 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:37.234 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:09:37.494 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:37.494 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:09:37.494 altname enp217s0f1np1
00:09:37.494 altname ens818f1np1
00:09:37.494 inet 192.168.100.9/24 scope global mlx_0_1
00:09:37.494 valid_lft forever preferred_lft forever
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]]
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # get_available_rdma_ips
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:09:37.494 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8
00:09:37.495 192.168.100.9'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # echo '192.168.100.8
00:09:37.495 192.168.100.9'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # head -n 1
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # echo '192.168.100.8
00:09:37.495 192.168.100.9'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # tail -n +2
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # head -n 1
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' rdma == tcp ']'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' rdma == rdma ']'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-rdma
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']'
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now'
00:09:37.495 run this test only with TCP transport for now
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini
00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath
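The multipath test stops here by design: it only supports TCP so far, and on an RDMA run it tears down and exits 0 so the suite keeps going. A reconstruction of the guard from the @51-@54 trace entries (the variable name TEST_TRANSPORT is an assumption; the script compares whatever transport argument it was given):

    # Skip-on-non-TCP guard, sketched from target/multipath.sh@51-@54 in the trace.
    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo 'run this test only with TCP transport for now'
        nvmftestfini   # harness function: tears down the target and unloads the fabrics modules
        exit 0         # exit cleanly so run_test counts the test as finished
    fi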
-- nvmf/common.sh@512 -- # nvmfcleanup 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:37.495 rmmod nvme_rdma 00:09:37.495 rmmod nvme_fabrics 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:37.495 00:09:37.495 real 0m7.166s 00:09:37.495 user 0m2.103s 00:09:37.495 sys 0m5.278s 00:09:37.495 15:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.495 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:37.495 ************************************ 
00:09:37.495 END TEST nvmf_target_multipath 00:09:37.495 ************************************ 00:09:37.495 15:57:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:37.495 15:57:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.495 15:57:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.495 15:57:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.755 ************************************ 00:09:37.755 START TEST nvmf_zcopy 00:09:37.755 ************************************ 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:37.755 * Looking for test storage... 00:09:37.755 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.755 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:37.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.756 --rc genhtml_branch_coverage=1 00:09:37.756 --rc genhtml_function_coverage=1 00:09:37.756 --rc genhtml_legend=1 00:09:37.756 --rc geninfo_all_blocks=1 00:09:37.756 --rc geninfo_unexecuted_blocks=1 00:09:37.756 00:09:37.756 ' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:37.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.756 --rc genhtml_branch_coverage=1 00:09:37.756 --rc genhtml_function_coverage=1 00:09:37.756 --rc genhtml_legend=1 00:09:37.756 --rc geninfo_all_blocks=1 00:09:37.756 --rc geninfo_unexecuted_blocks=1 00:09:37.756 00:09:37.756 ' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:37.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.756 --rc genhtml_branch_coverage=1 00:09:37.756 --rc genhtml_function_coverage=1 00:09:37.756 --rc genhtml_legend=1 00:09:37.756 --rc geninfo_all_blocks=1 00:09:37.756 --rc geninfo_unexecuted_blocks=1 00:09:37.756 00:09:37.756 ' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:37.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.756 --rc genhtml_branch_coverage=1 00:09:37.756 --rc genhtml_function_coverage=1 00:09:37.756 --rc genhtml_legend=1 00:09:37.756 --rc geninfo_all_blocks=1 00:09:37.756 --rc geninfo_unexecuted_blocks=1 00:09:37.756 00:09:37.756 ' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.756 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.756 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.016 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:38.016 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:38.016 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.016 15:57:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:44.648 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:44.648 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:44.648 15:57:12 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:44.648 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:44.648 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # rdma_device_init 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:44.648 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:44.649 15:57:12 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:44.649 15:57:12 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:44.649 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:44.649 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:44.649 altname enp217s0f0np0 00:09:44.649 altname ens818f0np0 00:09:44.649 inet 192.168.100.8/24 scope global mlx_0_0 00:09:44.649 valid_lft forever preferred_lft forever 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for 
nic_name in $(get_rdma_if_list) 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:44.649 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:44.649 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:44.649 altname enp217s0f1np1 00:09:44.649 altname ens818f1np1 00:09:44.649 inet 192.168.100.9/24 scope global mlx_0_1 00:09:44.649 valid_lft forever preferred_lft forever 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.649 15:57:13 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:44.649 192.168.100.9' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:44.649 192.168.100.9' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # head -n 1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:44.649 192.168.100.9' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # head -n 1 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # tail -n +2 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 
-- # set +x 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=2698789 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 2698789 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2698789 ']' 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.649 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.650 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.650 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.909 [2024-12-15 15:57:13.229282] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:44.909 [2024-12-15 15:57:13.229339] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.909 [2024-12-15 15:57:13.299930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.909 [2024-12-15 15:57:13.337541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.909 [2024-12-15 15:57:13.337581] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.909 [2024-12-15 15:57:13.337591] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.909 [2024-12-15 15:57:13.337600] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.909 [2024-12-15 15:57:13.337607] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:44.909 [2024-12-15 15:57:13.337630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:09:44.909 Unsupported transport: rdma 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:44.909 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:45.169 nvmf_trace.0 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:45.169 rmmod nvme_rdma 00:09:45.169 rmmod nvme_fabrics 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 2698789 ']' 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 2698789 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2698789 ']' 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2698789 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2698789 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2698789' 00:09:45.169 killing process with pid 2698789 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2698789 00:09:45.169 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2698789 00:09:45.428 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:45.428 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:45.428 00:09:45.428 real 0m7.715s 00:09:45.428 user 0m2.708s 00:09:45.428 sys 0m5.612s 00:09:45.428 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.428 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.428 ************************************ 00:09:45.428 END TEST nvmf_zcopy 00:09:45.428 ************************************ 00:09:45.429 15:57:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:45.429 15:57:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:45.429 15:57:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.429 15:57:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.429 ************************************ 00:09:45.429 START TEST nvmf_nmic 00:09:45.429 ************************************ 00:09:45.429 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:45.429 * Looking for test storage... 
00:09:45.429 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:45.429 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:45.429 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:45.688 15:57:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.688 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.689 --rc genhtml_branch_coverage=1 00:09:45.689 --rc genhtml_function_coverage=1 00:09:45.689 --rc genhtml_legend=1 00:09:45.689 --rc geninfo_all_blocks=1 00:09:45.689 --rc geninfo_unexecuted_blocks=1 00:09:45.689 00:09:45.689 ' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.689 --rc genhtml_branch_coverage=1 00:09:45.689 --rc genhtml_function_coverage=1 00:09:45.689 --rc genhtml_legend=1 00:09:45.689 --rc geninfo_all_blocks=1 00:09:45.689 --rc geninfo_unexecuted_blocks=1 00:09:45.689 00:09:45.689 ' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.689 --rc genhtml_branch_coverage=1 00:09:45.689 --rc genhtml_function_coverage=1 00:09:45.689 --rc genhtml_legend=1 00:09:45.689 --rc geninfo_all_blocks=1 00:09:45.689 --rc geninfo_unexecuted_blocks=1 00:09:45.689 00:09:45.689 ' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.689 --rc genhtml_branch_coverage=1 00:09:45.689 --rc genhtml_function_coverage=1 00:09:45.689 --rc genhtml_legend=1 00:09:45.689 --rc geninfo_all_blocks=1 00:09:45.689 --rc geninfo_unexecuted_blocks=1 00:09:45.689 00:09:45.689 ' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.689 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.689 15:57:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.815 15:57:20 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:53.815 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:53.816 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:53.816 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 
)) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:53.816 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:53.816 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # rdma_device_init 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:53.816 15:57:20 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:53.816 15:57:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:53.816 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:53.816 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:53.816 altname enp217s0f0np0 00:09:53.816 altname ens818f0np0 00:09:53.816 inet 192.168.100.8/24 scope global mlx_0_0 00:09:53.816 valid_lft forever preferred_lft forever 00:09:53.816 
15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:53.816 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:53.816 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:53.816 altname enp217s0f1np1 00:09:53.816 altname ens818f1np1 00:09:53.816 inet 192.168.100.9/24 scope global mlx_0_1 00:09:53.816 valid_lft forever preferred_lft forever 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.816 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:53.817 192.168.100.9' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:53.817 192.168.100.9' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # head -n 1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:53.817 192.168.100.9' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # tail -n +2 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # head -n 1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 
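RDMA_IP_LIST arrives as a newline-separated string, and the harness peels off the first two entries with head/tail exactly as traced above. The same selection in isolation, using the addresses from this run:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9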
00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=2702246 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 2702246 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2702246 ']' 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 [2024-12-15 15:57:21.247619] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:53.817 [2024-12-15 15:57:21.247675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.817 [2024-12-15 15:57:21.318904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.817 [2024-12-15 15:57:21.360295] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.817 [2024-12-15 15:57:21.360339] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.817 [2024-12-15 15:57:21.360349] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.817 [2024-12-15 15:57:21.360358] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.817 [2024-12-15 15:57:21.360364] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
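waitforlisten blocks until the freshly launched nvmf_tgt answers on its JSON-RPC socket. A stripped-down sketch of that polling loop, under the assumption that probing the socket with the real rpc_get_methods RPC is enough (the actual helper in autotest_common.sh also verifies the pid stays alive and enforces max_retries):

    nvmfpid=$!                      # pid of the nvmf_tgt just started
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break                   # target is up and listening
        fi
        sleep 0.5
    done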
00:09:53.817 [2024-12-15 15:57:21.360461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.817 [2024-12-15 15:57:21.360577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.817 [2024-12-15 15:57:21.360648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.817 [2024-12-15 15:57:21.360649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 [2024-12-15 15:57:21.540784] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x82fe40/0x834330) succeed. 00:09:53.817 [2024-12-15 15:57:21.551490] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x831480/0x8759d0) succeed. 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 Malloc0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:53.817 15:57:21 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 [2024-12-15 15:57:21.716850] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:53.817 test case1: single bdev can't be used in multiple subsystems 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.817 [2024-12-15 15:57:21.740653] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:53.817 [2024-12-15 15:57:21.740673] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:53.817 [2024-12-15 15:57:21.740683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.817 request: 00:09:53.817 { 00:09:53.817 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:53.817 "namespace": { 00:09:53.817 "bdev_name": "Malloc0", 00:09:53.817 "no_auto_visible": false 00:09:53.817 }, 00:09:53.817 "method": "nvmf_subsystem_add_ns", 00:09:53.817 "req_id": 1 00:09:53.817 } 00:09:53.817 Got JSON-RPC error response 00:09:53.817 response: 00:09:53.817 { 00:09:53.817 "code": -32602, 00:09:53.817 "message": "Invalid parameters" 00:09:53.817 } 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:53.817 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:53.818 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:53.818 Adding namespace failed - expected result. 
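Test case 1 deliberately provokes the error above: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 must fail. The nmic_status dance in the trace is the usual expected-failure idiom; a sketch of the same check, with rpc_cmd standing in for the harness wrapper:

    nmic_status=0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
    if [ "$nmic_status" -eq 0 ]; then
        echo ' Adding namespace passed - failure expected.'
        exit 1                      # the bdev must not be shareable
    fi
    echo ' Adding namespace failed - expected result.'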
00:09:53.818 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:53.818 test case2: host connect to nvmf target in multiple paths 00:09:53.818 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:09:53.818 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.818 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.818 [2024-12-15 15:57:21.752725] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:09:53.818 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.818 15:57:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:54.386 15:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:09:55.323 15:57:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:55.323 15:57:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:55.323 15:57:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.323 15:57:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:55.323 15:57:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:57.228 15:57:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:57.228 15:57:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:57.228 15:57:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.228 15:57:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:57.228 15:57:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.228 15:57:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:57.228 15:57:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:57.514 [global] 00:09:57.514 thread=1 00:09:57.514 invalidate=1 00:09:57.514 rw=write 00:09:57.514 time_based=1 00:09:57.514 runtime=1 00:09:57.514 ioengine=libaio 00:09:57.514 direct=1 00:09:57.514 bs=4096 00:09:57.514 iodepth=1 00:09:57.514 norandommap=0 00:09:57.514 numjobs=1 00:09:57.514 00:09:57.514 verify_dump=1 00:09:57.514 verify_backlog=512 00:09:57.514 verify_state_save=0 00:09:57.514 do_verify=1 00:09:57.514 verify=crc32c-intel 00:09:57.514 [job0] 00:09:57.514 filename=/dev/nvme0n1 00:09:57.514 Could not set queue depth (nvme0n1) 00:09:57.776 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.776 fio-3.35 00:09:57.776 Starting 1 thread 00:09:59.149 00:09:59.149 job0: (groupid=0, jobs=1): err= 0: pid=2703226: Sun Dec 15 15:57:27 2024 00:09:59.149 read: IOPS=6973, BW=27.2MiB/s (28.6MB/s)(27.3MiB/1001msec) 00:09:59.149 slat (nsec): min=8327, max=32442, avg=8945.65, stdev=847.34 00:09:59.149 clat (nsec): min=32712, max=94197, avg=58853.64, stdev=3467.66 00:09:59.149 lat (usec): min=59, max=103, avg=67.80, stdev= 3.52 00:09:59.149 clat percentiles (nsec): 00:09:59.149 | 1.00th=[52480], 5.00th=[54016], 10.00th=[54528], 20.00th=[56064], 00:09:59.149 | 30.00th=[57088], 40.00th=[57600], 50.00th=[58624], 60.00th=[59648], 00:09:59.149 | 70.00th=[60672], 80.00th=[61696], 90.00th=[63232], 95.00th=[64768], 00:09:59.149 | 99.00th=[68096], 99.50th=[70144], 99.90th=[75264], 99.95th=[78336], 00:09:59.149 | 99.99th=[93696] 00:09:59.149 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:09:59.149 slat (nsec): min=10762, max=45215, avg=11567.14, stdev=1128.39 00:09:59.149 clat (usec): min=30, max=130, avg=56.83, stdev= 3.79 00:09:59.149 lat (usec): min=59, max=142, avg=68.39, stdev= 3.93 00:09:59.149 clat percentiles (usec): 00:09:59.149 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:09:59.149 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:09:59.149 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 64], 00:09:59.149 | 99.00th=[ 67], 99.50th=[ 69], 99.90th=[ 77], 99.95th=[ 85], 00:09:59.149 | 99.99th=[ 131] 00:09:59.149 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:09:59.149 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:09:59.149 lat (usec) : 50=0.40%, 100=99.59%, 250=0.01% 00:09:59.149 cpu : usr=12.30%, sys=17.50%, ctx=14148, majf=0, minf=1 00:09:59.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.149 issued rwts: total=6980,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.149 00:09:59.149 Run status group 0 (all jobs): 00:09:59.149 READ: bw=27.2MiB/s (28.6MB/s), 27.2MiB/s-27.2MiB/s (28.6MB/s-28.6MB/s), io=27.3MiB (28.6MB), run=1001-1001msec 00:09:59.149 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:09:59.149 00:09:59.149 Disk stats (read/write): 00:09:59.149 nvme0n1: ios=6193/6563, merge=0/0, ticks=327/315, in_queue=642, util=90.58% 00:09:59.149 15:57:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 
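The fio-wrapper call expands to the job file echoed just above. Running the equivalent 4 KiB sequential-write verify job by hand, against the block device that nvme connect exposed (nvme0n1 on this machine):

    cat > nmic-job.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio nmic-job.fio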
00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:01.048 rmmod nvme_rdma 00:10:01.048 rmmod nvme_fabrics 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 2702246 ']' 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 2702246 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2702246 ']' 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2702246 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2702246 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2702246' 00:10:01.048 killing process with pid 2702246 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2702246 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2702246 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:01.048 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:10:01.307 00:10:01.307 real 0m15.733s 00:10:01.307 user 0m43.181s 00:10:01.307 sys 0m6.402s 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.307 ************************************ 00:10:01.307 END TEST nvmf_nmic 
00:10:01.307 ************************************ 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.307 ************************************ 00:10:01.307 START TEST nvmf_fio_target 00:10:01.307 ************************************ 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:01.307 * Looking for test storage... 00:10:01.307 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.307 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.566 --rc genhtml_branch_coverage=1 00:10:01.566 --rc genhtml_function_coverage=1 00:10:01.566 --rc genhtml_legend=1 00:10:01.566 --rc geninfo_all_blocks=1 00:10:01.566 --rc geninfo_unexecuted_blocks=1 00:10:01.566 00:10:01.566 ' 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.566 --rc genhtml_branch_coverage=1 00:10:01.566 --rc genhtml_function_coverage=1 00:10:01.566 --rc genhtml_legend=1 00:10:01.566 --rc geninfo_all_blocks=1 00:10:01.566 --rc geninfo_unexecuted_blocks=1 00:10:01.566 00:10:01.566 ' 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.566 --rc genhtml_branch_coverage=1 00:10:01.566 --rc genhtml_function_coverage=1 00:10:01.566 --rc genhtml_legend=1 00:10:01.566 --rc geninfo_all_blocks=1 00:10:01.566 --rc geninfo_unexecuted_blocks=1 00:10:01.566 00:10:01.566 ' 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.566 --rc genhtml_branch_coverage=1 00:10:01.566 --rc genhtml_function_coverage=1 00:10:01.566 --rc genhtml_legend=1 00:10:01.566 --rc geninfo_all_blocks=1 00:10:01.566 --rc geninfo_unexecuted_blocks=1 00:10:01.566 00:10:01.566 ' 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.566 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.567 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.567 
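fio.sh reuses the same backing store as nmic.sh: a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 above). Creating one directly over JSON-RPC, with the Malloc0 name these tests use throughout:

    # 64 MiB RAM-backed bdev with 512 B logical blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0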
15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.567 15:57:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.126 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.126 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
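gather_supported_nvmf_pci_devs whitelists NICs by vendor:device ID, and the ConnectX ports on this node match as 0x15b3:0x1015. A rough standalone equivalent using lspci (numeric-ID output format assumed):

    # List Mellanox (vendor 0x15b3) devices by full PCI address
    lspci -Dnn | awk '/\[15b3:/ {print $1}'

On this machine that should report 0000:d9:00.0 and 0000:d9:00.1, the two ports that the log later surfaces as mlx_0_0 and mlx_0_1.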
00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:08.127 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:08.127 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # 
[[ mlx5_core == unbound ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:08.127 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:08.127 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # rdma_device_init 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux 
'!=' Linux ']' 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@526 -- # allocate_nic_ips 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:08.127 15:57:36 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:08.127 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:08.127 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:08.127 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:08.127 altname enp217s0f0np0 00:10:08.127 altname ens818f0np0 00:10:08.128 inet 192.168.100.8/24 scope global mlx_0_0 00:10:08.128 valid_lft forever preferred_lft forever 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:08.128 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:08.128 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:08.128 altname enp217s0f1np1 00:10:08.128 altname ens818f1np1 00:10:08.128 inet 192.168.100.9/24 scope global mlx_0_1 00:10:08.128 valid_lft forever preferred_lft forever 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:08.128 15:57:36 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:10:08.128 192.168.100.9' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:10:08.128 192.168.100.9' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # head -n 1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:10:08.128 192.168.100.9' 00:10:08.128 
15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # tail -n +2 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # head -n 1 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=2707194 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 2707194 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2707194 ']' 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.128 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.128 [2024-12-15 15:57:36.500998] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:08.128 [2024-12-15 15:57:36.501048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.128 [2024-12-15 15:57:36.571578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.128 [2024-12-15 15:57:36.611540] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.128 [2024-12-15 15:57:36.611581] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:08.128 [2024-12-15 15:57:36.611590] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.128 [2024-12-15 15:57:36.611599] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.128 [2024-12-15 15:57:36.611607] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.128 [2024-12-15 15:57:36.612704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.128 [2024-12-15 15:57:36.612724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.128 [2024-12-15 15:57:36.612812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.128 [2024-12-15 15:57:36.612814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.387 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.387 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:08.387 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:08.387 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.387 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.387 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.387 15:57:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:08.647 [2024-12-15 15:57:36.960869] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1abbe40/0x1ac0330) succeed. 00:10:08.647 [2024-12-15 15:57:36.971315] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1abd480/0x1b019d0) succeed. 
00:10:08.647 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.906 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:08.906 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.220 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:09.220 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.520 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:09.520 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.520 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:09.520 15:57:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:09.779 15:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.038 15:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:10.038 15:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.038 15:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:10.038 15:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.297 15:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:10.297 15:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:10.555 15:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.815 15:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:10.815 15:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.073 15:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:11.073 15:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.074 15:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:11.332 [2024-12-15 15:57:39.749203] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:11.332 15:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:11.591 15:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:11.850 15:57:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:12.786 15:57:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:12.786 15:57:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:12.786 15:57:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.786 15:57:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:12.786 15:57:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:12.786 15:57:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:14.689 15:57:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.689 15:57:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.689 15:57:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.689 15:57:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:14.689 15:57:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.689 15:57:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:14.689 15:57:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:14.689 [global] 00:10:14.689 thread=1 00:10:14.689 invalidate=1 00:10:14.689 rw=write 00:10:14.689 time_based=1 00:10:14.689 runtime=1 00:10:14.689 ioengine=libaio 00:10:14.689 direct=1 00:10:14.689 bs=4096 00:10:14.689 iodepth=1 00:10:14.689 norandommap=0 00:10:14.689 numjobs=1 00:10:14.689 00:10:14.689 verify_dump=1 00:10:14.689 verify_backlog=512 00:10:14.689 verify_state_save=0 00:10:14.689 do_verify=1 00:10:14.689 verify=crc32c-intel 00:10:14.689 [job0] 00:10:14.689 filename=/dev/nvme0n1 00:10:14.689 [job1] 00:10:14.689 filename=/dev/nvme0n2 00:10:14.689 [job2] 00:10:14.689 filename=/dev/nvme0n3 00:10:14.689 [job3] 00:10:14.689 filename=/dev/nvme0n4 00:10:14.978 Could not set queue depth (nvme0n1) 00:10:14.978 Could not set queue depth (nvme0n2) 00:10:14.978 Could not set queue depth (nvme0n3) 00:10:14.978 Could not set queue depth (nvme0n4) 00:10:15.241 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.241 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.241 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.241 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.241 fio-3.35 00:10:15.242 Starting 4 threads 00:10:16.634 00:10:16.634 job0: (groupid=0, jobs=1): err= 0: pid=2708523: Sun Dec 15 15:57:44 2024 00:10:16.634 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:10:16.634 slat (nsec): min=8304, max=20895, avg=8906.74, stdev=689.27 00:10:16.634 clat (usec): min=66, max=267, avg=96.33, stdev=21.70 00:10:16.634 lat (usec): min=74, max=276, avg=105.24, stdev=21.71 00:10:16.634 clat percentiles (usec): 00:10:16.634 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:10:16.634 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 90], 00:10:16.634 | 70.00th=[ 112], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 135], 00:10:16.634 | 99.00th=[ 145], 99.50th=[ 161], 99.90th=[ 184], 99.95th=[ 200], 00:10:16.634 | 99.99th=[ 269] 00:10:16.634 write: IOPS=4786, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1001msec); 0 zone resets 00:10:16.634 slat (nsec): min=8654, max=55799, avg=11503.79, stdev=1227.47 00:10:16.634 clat (usec): min=65, max=171, avg=91.04, stdev=20.15 00:10:16.634 lat (usec): min=77, max=182, avg=102.54, stdev=20.13 00:10:16.634 clat percentiles (usec): 00:10:16.634 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:10:16.634 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 86], 00:10:16.634 | 70.00th=[ 99], 80.00th=[ 116], 90.00th=[ 123], 95.00th=[ 129], 00:10:16.634 | 99.00th=[ 141], 99.50th=[ 151], 99.90th=[ 165], 99.95th=[ 169], 00:10:16.634 | 99.99th=[ 172] 00:10:16.634 bw ( KiB/s): min=22400, max=22400, per=28.05%, avg=22400.00, stdev= 0.00, samples=1 00:10:16.634 iops : min= 5600, max= 5600, avg=5600.00, stdev= 0.00, samples=1 00:10:16.634 lat (usec) : 100=68.46%, 250=31.52%, 500=0.01% 00:10:16.634 cpu : usr=7.90%, sys=12.00%, ctx=9400, majf=0, minf=1 00:10:16.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.634 issued rwts: total=4608,4791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.634 job1: (groupid=0, jobs=1): err= 0: pid=2708543: Sun Dec 15 15:57:44 2024 00:10:16.634 read: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(20.2MiB/1002msec) 00:10:16.634 slat (nsec): min=8321, max=25478, avg=8808.01, stdev=825.80 00:10:16.634 clat (usec): min=67, max=121, avg=81.77, stdev= 6.16 00:10:16.634 lat (usec): min=75, max=129, avg=90.57, stdev= 6.21 00:10:16.634 clat percentiles (usec): 00:10:16.634 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 77], 00:10:16.634 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83], 00:10:16.634 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 90], 95.00th=[ 93], 00:10:16.634 | 99.00th=[ 101], 99.50th=[ 104], 99.90th=[ 109], 99.95th=[ 114], 00:10:16.634 | 99.99th=[ 122] 00:10:16.634 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:16.634 slat (nsec): min=10671, max=38666, avg=11535.96, stdev=1201.37 00:10:16.634 clat (usec): min=61, max=118, avg=77.84, 
stdev= 6.10 00:10:16.634 lat (usec): min=72, max=132, avg=89.38, stdev= 6.23 00:10:16.634 clat percentiles (usec): 00:10:16.634 | 1.00th=[ 68], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74], 00:10:16.634 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 79], 00:10:16.635 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 90], 00:10:16.635 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 108], 99.95th=[ 110], 00:10:16.635 | 99.99th=[ 119] 00:10:16.635 bw ( KiB/s): min=22192, max=22864, per=28.21%, avg=22528.00, stdev=475.18, samples=2 00:10:16.635 iops : min= 5548, max= 5716, avg=5632.00, stdev=118.79, samples=2 00:10:16.635 lat (usec) : 100=99.24%, 250=0.76% 00:10:16.635 cpu : usr=9.19%, sys=13.79%, ctx=10803, majf=0, minf=1 00:10:16.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.635 issued rwts: total=5170,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.635 job2: (groupid=0, jobs=1): err= 0: pid=2708567: Sun Dec 15 15:57:44 2024 00:10:16.635 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:16.635 slat (nsec): min=8405, max=32046, avg=9162.33, stdev=970.06 00:10:16.635 clat (usec): min=68, max=193, avg=106.94, stdev=15.72 00:10:16.635 lat (usec): min=77, max=203, avg=116.11, stdev=15.79 00:10:16.635 clat percentiles (usec): 00:10:16.635 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 93], 00:10:16.635 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 103], 60.00th=[ 110], 00:10:16.635 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 133], 00:10:16.635 | 99.00th=[ 143], 99.50th=[ 151], 99.90th=[ 169], 99.95th=[ 174], 00:10:16.635 | 99.99th=[ 194] 00:10:16.635 write: IOPS=4457, BW=17.4MiB/s (18.3MB/s)(17.4MiB/1001msec); 0 zone resets 00:10:16.635 slat (nsec): min=10581, max=39830, avg=11798.15, stdev=1166.63 00:10:16.635 clat (usec): min=69, max=180, avg=100.78, stdev=15.18 00:10:16.635 lat (usec): min=80, max=192, avg=112.57, stdev=15.24 00:10:16.635 clat percentiles (usec): 00:10:16.635 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:10:16.635 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 101], 00:10:16.635 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 123], 95.00th=[ 128], 00:10:16.635 | 99.00th=[ 143], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 172], 00:10:16.635 | 99.99th=[ 182] 00:10:16.635 bw ( KiB/s): min=20480, max=20480, per=25.64%, avg=20480.00, stdev= 0.00, samples=1 00:10:16.635 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:16.635 lat (usec) : 100=51.18%, 250=48.82% 00:10:16.635 cpu : usr=6.50%, sys=12.00%, ctx=8558, majf=0, minf=1 00:10:16.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.635 issued rwts: total=4096,4462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.635 job3: (groupid=0, jobs=1): err= 0: pid=2708575: Sun Dec 15 15:57:44 2024 00:10:16.635 read: IOPS=4801, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1001msec) 00:10:16.635 slat (nsec): min=8541, max=31560, avg=9132.31, stdev=839.09 00:10:16.635 clat (usec): min=69, max=134, avg=89.93, stdev= 6.74 
00:10:16.635 lat (usec): min=82, max=143, avg=99.07, stdev= 6.79 00:10:16.635 clat percentiles (usec): 00:10:16.635 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:10:16.635 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:10:16.635 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 102], 00:10:16.635 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 120], 99.95th=[ 122], 00:10:16.635 | 99.99th=[ 135] 00:10:16.635 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:10:16.635 slat (nsec): min=10477, max=41308, avg=11751.88, stdev=1049.74 00:10:16.635 clat (usec): min=67, max=296, avg=85.55, stdev= 7.34 00:10:16.635 lat (usec): min=78, max=308, avg=97.30, stdev= 7.45 00:10:16.635 clat percentiles (usec): 00:10:16.635 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 81], 00:10:16.635 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 00:10:16.635 | 70.00th=[ 89], 80.00th=[ 91], 90.00th=[ 95], 95.00th=[ 98], 00:10:16.635 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 116], 99.95th=[ 119], 00:10:16.635 | 99.99th=[ 297] 00:10:16.635 bw ( KiB/s): min=20480, max=20480, per=25.64%, avg=20480.00, stdev= 0.00, samples=1 00:10:16.635 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:16.635 lat (usec) : 100=94.70%, 250=5.29%, 500=0.01% 00:10:16.635 cpu : usr=9.10%, sys=12.20%, ctx=9926, majf=0, minf=1 00:10:16.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.635 issued rwts: total=4806,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.635 00:10:16.635 Run status group 0 (all jobs): 00:10:16.635 READ: bw=72.8MiB/s (76.4MB/s), 16.0MiB/s-20.2MiB/s (16.8MB/s-21.1MB/s), io=73.0MiB (76.5MB), run=1001-1002msec 00:10:16.635 WRITE: bw=78.0MiB/s (81.8MB/s), 17.4MiB/s-22.0MiB/s (18.3MB/s-23.0MB/s), io=78.1MiB (81.9MB), run=1001-1002msec 00:10:16.635 00:10:16.635 Disk stats (read/write): 00:10:16.635 nvme0n1: ios=3987/4096, merge=0/0, ticks=337/299, in_queue=636, util=82.67% 00:10:16.635 nvme0n2: ios=4199/4608, merge=0/0, ticks=306/314, in_queue=620, util=84.61% 00:10:16.635 nvme0n3: ios=3584/3609, merge=0/0, ticks=348/308, in_queue=656, util=88.06% 00:10:16.635 nvme0n4: ios=3999/4096, merge=0/0, ticks=336/324, in_queue=660, util=89.22% 00:10:16.635 15:57:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:16.635 [global] 00:10:16.635 thread=1 00:10:16.635 invalidate=1 00:10:16.635 rw=randwrite 00:10:16.635 time_based=1 00:10:16.635 runtime=1 00:10:16.635 ioengine=libaio 00:10:16.635 direct=1 00:10:16.635 bs=4096 00:10:16.635 iodepth=1 00:10:16.635 norandommap=0 00:10:16.635 numjobs=1 00:10:16.635 00:10:16.635 verify_dump=1 00:10:16.635 verify_backlog=512 00:10:16.635 verify_state_save=0 00:10:16.635 do_verify=1 00:10:16.635 verify=crc32c-intel 00:10:16.635 [job0] 00:10:16.635 filename=/dev/nvme0n1 00:10:16.635 [job1] 00:10:16.635 filename=/dev/nvme0n2 00:10:16.635 [job2] 00:10:16.635 filename=/dev/nvme0n3 00:10:16.635 [job3] 00:10:16.635 filename=/dev/nvme0n4 00:10:16.635 Could not set queue depth (nvme0n1) 00:10:16.635 Could not set queue depth (nvme0n2) 00:10:16.635 Could not set queue depth (nvme0n3) 00:10:16.635 
Could not set queue depth (nvme0n4) 00:10:16.893 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.893 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.893 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.893 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.893 fio-3.35 00:10:16.893 Starting 4 threads 00:10:18.285 00:10:18.285 job0: (groupid=0, jobs=1): err= 0: pid=2708987: Sun Dec 15 15:57:46 2024 00:10:18.285 read: IOPS=5445, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1001msec) 00:10:18.285 slat (nsec): min=8281, max=29243, avg=8899.85, stdev=809.35 00:10:18.285 clat (usec): min=61, max=103, avg=79.30, stdev= 5.29 00:10:18.285 lat (usec): min=73, max=112, avg=88.19, stdev= 5.34 00:10:18.285 clat percentiles (usec): 00:10:18.285 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:10:18.285 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:10:18.285 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 89], 00:10:18.285 | 99.00th=[ 95], 99.50th=[ 98], 99.90th=[ 101], 99.95th=[ 102], 00:10:18.285 | 99.99th=[ 103] 00:10:18.285 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:10:18.285 slat (nsec): min=10425, max=72392, avg=11274.00, stdev=1417.72 00:10:18.285 clat (usec): min=58, max=116, avg=75.79, stdev= 5.28 00:10:18.285 lat (usec): min=72, max=143, avg=87.06, stdev= 5.43 00:10:18.285 clat percentiles (usec): 00:10:18.285 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 72], 00:10:18.285 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 76], 60.00th=[ 77], 00:10:18.285 | 70.00th=[ 79], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 86], 00:10:18.285 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 102], 99.95th=[ 105], 00:10:18.285 | 99.99th=[ 117] 00:10:18.285 bw ( KiB/s): min=23864, max=23864, per=31.96%, avg=23864.00, stdev= 0.00, samples=1 00:10:18.285 iops : min= 5966, max= 5966, avg=5966.00, stdev= 0.00, samples=1 00:10:18.285 lat (usec) : 100=99.88%, 250=0.12% 00:10:18.285 cpu : usr=10.10%, sys=13.30%, ctx=11084, majf=0, minf=1 00:10:18.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.285 issued rwts: total=5451,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.285 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.285 job1: (groupid=0, jobs=1): err= 0: pid=2708999: Sun Dec 15 15:57:46 2024 00:10:18.285 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:18.285 slat (nsec): min=8566, max=41390, avg=10173.17, stdev=2539.56 00:10:18.285 clat (usec): min=75, max=185, avg=121.31, stdev=11.83 00:10:18.285 lat (usec): min=84, max=194, avg=131.49, stdev=11.84 00:10:18.285 clat percentiles (usec): 00:10:18.285 | 1.00th=[ 90], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 114], 00:10:18.285 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 124], 00:10:18.285 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 139], 00:10:18.285 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 184], 00:10:18.285 | 99.99th=[ 186] 00:10:18.285 write: IOPS=3914, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1001msec); 0 zone resets 00:10:18.285 slat (nsec): min=10270, 
max=57644, avg=12511.05, stdev=3023.93 00:10:18.285 clat (usec): min=70, max=356, avg=117.18, stdev=12.44 00:10:18.285 lat (usec): min=83, max=368, avg=129.69, stdev=12.48 00:10:18.285 clat percentiles (usec): 00:10:18.285 | 1.00th=[ 87], 5.00th=[ 100], 10.00th=[ 104], 20.00th=[ 109], 00:10:18.285 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 120], 00:10:18.285 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 135], 00:10:18.285 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 194], 99.95th=[ 206], 00:10:18.285 | 99.99th=[ 359] 00:10:18.285 bw ( KiB/s): min=16384, max=16384, per=21.94%, avg=16384.00, stdev= 0.00, samples=1 00:10:18.285 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:18.285 lat (usec) : 100=4.04%, 250=95.95%, 500=0.01% 00:10:18.285 cpu : usr=5.80%, sys=10.20%, ctx=7503, majf=0, minf=2 00:10:18.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.286 issued rwts: total=3584,3918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.286 job2: (groupid=0, jobs=1): err= 0: pid=2709023: Sun Dec 15 15:57:46 2024 00:10:18.286 read: IOPS=3734, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec) 00:10:18.286 slat (nsec): min=8613, max=19435, avg=9234.58, stdev=743.40 00:10:18.286 clat (usec): min=74, max=356, avg=117.86, stdev=15.36 00:10:18.286 lat (usec): min=83, max=366, avg=127.09, stdev=15.35 00:10:18.286 clat percentiles (usec): 00:10:18.286 | 1.00th=[ 82], 5.00th=[ 87], 10.00th=[ 92], 20.00th=[ 110], 00:10:18.286 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 123], 00:10:18.286 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 133], 95.00th=[ 137], 00:10:18.286 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 200], 99.95th=[ 212], 00:10:18.286 | 99.99th=[ 359] 00:10:18.286 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:18.286 slat (nsec): min=10269, max=38972, avg=11461.72, stdev=1044.23 00:10:18.286 clat (usec): min=71, max=173, avg=112.06, stdev=16.48 00:10:18.286 lat (usec): min=82, max=184, avg=123.52, stdev=16.40 00:10:18.286 clat percentiles (usec): 00:10:18.286 | 1.00th=[ 77], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 96], 00:10:18.286 | 30.00th=[ 109], 40.00th=[ 113], 50.00th=[ 117], 60.00th=[ 120], 00:10:18.286 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 130], 95.00th=[ 135], 00:10:18.286 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 163], 00:10:18.286 | 99.99th=[ 174] 00:10:18.286 bw ( KiB/s): min=16384, max=16384, per=21.94%, avg=16384.00, stdev= 0.00, samples=1 00:10:18.286 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:18.286 lat (usec) : 100=18.27%, 250=81.72%, 500=0.01% 00:10:18.286 cpu : usr=5.20%, sys=11.50%, ctx=7834, majf=0, minf=1 00:10:18.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.286 issued rwts: total=3738,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.286 job3: (groupid=0, jobs=1): err= 0: pid=2709031: Sun Dec 15 15:57:46 2024 00:10:18.286 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 
00:10:18.286 slat (nsec): min=8489, max=24699, avg=9141.28, stdev=782.66 00:10:18.286 clat (usec): min=74, max=202, avg=93.28, stdev=12.84 00:10:18.286 lat (usec): min=83, max=210, avg=102.42, stdev=12.87 00:10:18.286 clat percentiles (usec): 00:10:18.286 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 85], 00:10:18.286 | 30.00th=[ 87], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:10:18.286 | 70.00th=[ 94], 80.00th=[ 98], 90.00th=[ 113], 95.00th=[ 125], 00:10:18.286 | 99.00th=[ 137], 99.50th=[ 143], 99.90th=[ 157], 99.95th=[ 165], 00:10:18.286 | 99.99th=[ 202] 00:10:18.286 write: IOPS=5032, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1001msec); 0 zone resets 00:10:18.286 slat (nsec): min=10468, max=42769, avg=11531.50, stdev=1073.22 00:10:18.286 clat (usec): min=68, max=157, avg=88.43, stdev=11.59 00:10:18.286 lat (usec): min=81, max=169, avg=99.96, stdev=11.62 00:10:18.286 clat percentiles (usec): 00:10:18.286 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:10:18.286 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 88], 00:10:18.286 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 117], 00:10:18.286 | 99.00th=[ 128], 99.50th=[ 133], 99.90th=[ 143], 99.95th=[ 155], 00:10:18.286 | 99.99th=[ 159] 00:10:18.286 bw ( KiB/s): min=20480, max=20480, per=27.43%, avg=20480.00, stdev= 0.00, samples=1 00:10:18.286 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:18.286 lat (usec) : 100=85.67%, 250=14.33% 00:10:18.286 cpu : usr=7.40%, sys=13.20%, ctx=9646, majf=0, minf=2 00:10:18.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.286 issued rwts: total=4608,5038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.286 00:10:18.286 Run status group 0 (all jobs): 00:10:18.286 READ: bw=67.8MiB/s (71.1MB/s), 14.0MiB/s-21.3MiB/s (14.7MB/s-22.3MB/s), io=67.9MiB (71.2MB), run=1001-1001msec 00:10:18.286 WRITE: bw=72.9MiB/s (76.5MB/s), 15.3MiB/s-22.0MiB/s (16.0MB/s-23.0MB/s), io=73.0MiB (76.5MB), run=1001-1001msec 00:10:18.286 00:10:18.286 Disk stats (read/write): 00:10:18.286 nvme0n1: ios=4619/4608, merge=0/0, ticks=346/288, in_queue=634, util=83.95% 00:10:18.286 nvme0n2: ios=3072/3144, merge=0/0, ticks=344/326, in_queue=670, util=84.87% 00:10:18.286 nvme0n3: ios=3072/3476, merge=0/0, ticks=330/356, in_queue=686, util=88.30% 00:10:18.286 nvme0n4: ios=3842/4096, merge=0/0, ticks=333/328, in_queue=661, util=89.44% 00:10:18.286 15:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:18.286 [global] 00:10:18.286 thread=1 00:10:18.286 invalidate=1 00:10:18.286 rw=write 00:10:18.286 time_based=1 00:10:18.286 runtime=1 00:10:18.286 ioengine=libaio 00:10:18.286 direct=1 00:10:18.286 bs=4096 00:10:18.286 iodepth=128 00:10:18.286 norandommap=0 00:10:18.286 numjobs=1 00:10:18.286 00:10:18.286 verify_dump=1 00:10:18.286 verify_backlog=512 00:10:18.286 verify_state_save=0 00:10:18.286 do_verify=1 00:10:18.286 verify=crc32c-intel 00:10:18.286 [job0] 00:10:18.286 filename=/dev/nvme0n1 00:10:18.286 [job1] 00:10:18.286 filename=/dev/nvme0n2 00:10:18.286 [job2] 00:10:18.286 filename=/dev/nvme0n3 00:10:18.286 [job3] 00:10:18.286 filename=/dev/nvme0n4 00:10:18.286 Could not set queue 
depth (nvme0n1) 00:10:18.286 Could not set queue depth (nvme0n2) 00:10:18.286 Could not set queue depth (nvme0n3) 00:10:18.286 Could not set queue depth (nvme0n4) 00:10:18.547 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.547 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.547 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.547 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.547 fio-3.35 00:10:18.547 Starting 4 threads 00:10:19.920 00:10:19.920 job0: (groupid=0, jobs=1): err= 0: pid=2709430: Sun Dec 15 15:57:48 2024 00:10:19.920 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:10:19.920 slat (usec): min=2, max=1730, avg=135.37, stdev=316.03 00:10:19.920 clat (usec): min=15259, max=19162, avg=17442.88, stdev=495.67 00:10:19.920 lat (usec): min=16206, max=19302, avg=17578.24, stdev=399.01 00:10:19.920 clat percentiles (usec): 00:10:19.920 | 1.00th=[16057], 5.00th=[16581], 10.00th=[16712], 20.00th=[17171], 00:10:19.920 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:10:19.921 | 70.00th=[17695], 80.00th=[17695], 90.00th=[17957], 95.00th=[18220], 00:10:19.921 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:10:19.921 | 99.99th=[19268] 00:10:19.921 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1004msec); 0 zone resets 00:10:19.921 slat (usec): min=2, max=1603, avg=126.61, stdev=296.01 00:10:19.921 clat (usec): min=1922, max=18471, avg=16265.42, stdev=1421.17 00:10:19.921 lat (usec): min=3156, max=18876, avg=16392.03, stdev=1392.33 00:10:19.921 clat percentiles (usec): 00:10:19.921 | 1.00th=[ 7242], 5.00th=[15401], 10.00th=[15795], 20.00th=[16188], 00:10:19.921 | 30.00th=[16188], 40.00th=[16319], 50.00th=[16450], 60.00th=[16581], 00:10:19.921 | 70.00th=[16581], 80.00th=[16712], 90.00th=[16909], 95.00th=[17171], 00:10:19.921 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:10:19.921 | 99.99th=[18482] 00:10:19.921 bw ( KiB/s): min=14144, max=16384, per=17.49%, avg=15264.00, stdev=1583.92, samples=2 00:10:19.921 iops : min= 3536, max= 4096, avg=3816.00, stdev=395.98, samples=2 00:10:19.921 lat (msec) : 2=0.01%, 4=0.15%, 10=0.56%, 20=99.28% 00:10:19.921 cpu : usr=2.29%, sys=3.89%, ctx=1344, majf=0, minf=1 00:10:19.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:19.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.921 issued rwts: total=3584,3943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.921 job1: (groupid=0, jobs=1): err= 0: pid=2709447: Sun Dec 15 15:57:48 2024 00:10:19.921 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:19.921 slat (usec): min=2, max=2541, avg=135.40, stdev=398.84 00:10:19.921 clat (usec): min=15182, max=18883, avg=17455.31, stdev=553.03 00:10:19.921 lat (usec): min=16422, max=18886, avg=17590.72, stdev=392.16 00:10:19.921 clat percentiles (usec): 00:10:19.921 | 1.00th=[15795], 5.00th=[16188], 10.00th=[16712], 20.00th=[17171], 00:10:19.921 | 30.00th=[17433], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:10:19.921 | 70.00th=[17695], 80.00th=[17695], 90.00th=[17957], 
95.00th=[18220], 00:10:19.921 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:19.921 | 99.99th=[19006] 00:10:19.921 write: IOPS=3928, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1003msec); 0 zone resets 00:10:19.921 slat (usec): min=2, max=1632, avg=126.93, stdev=373.27 00:10:19.921 clat (usec): min=2393, max=19033, avg=16275.53, stdev=1351.46 00:10:19.921 lat (usec): min=3745, max=19044, avg=16402.46, stdev=1299.46 00:10:19.921 clat percentiles (usec): 00:10:19.921 | 1.00th=[ 7898], 5.00th=[15008], 10.00th=[15533], 20.00th=[16188], 00:10:19.921 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16450], 60.00th=[16581], 00:10:19.921 | 70.00th=[16712], 80.00th=[16712], 90.00th=[16909], 95.00th=[17433], 00:10:19.921 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006], 00:10:19.921 | 99.99th=[19006] 00:10:19.921 bw ( KiB/s): min=14120, max=16384, per=17.47%, avg=15252.00, stdev=1600.89, samples=2 00:10:19.921 iops : min= 3530, max= 4096, avg=3813.00, stdev=400.22, samples=2 00:10:19.921 lat (msec) : 4=0.12%, 10=0.57%, 20=99.31% 00:10:19.921 cpu : usr=2.20%, sys=2.99%, ctx=775, majf=0, minf=1 00:10:19.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:19.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.921 issued rwts: total=3584,3940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.921 job2: (groupid=0, jobs=1): err= 0: pid=2709467: Sun Dec 15 15:57:48 2024 00:10:19.921 read: IOPS=9718, BW=38.0MiB/s (39.8MB/s)(38.0MiB/1001msec) 00:10:19.921 slat (usec): min=2, max=1899, avg=49.83, stdev=180.48 00:10:19.921 clat (usec): min=3378, max=9734, avg=6576.97, stdev=348.52 00:10:19.921 lat (usec): min=3385, max=9742, avg=6626.80, stdev=336.67 00:10:19.921 clat percentiles (usec): 00:10:19.921 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:10:19.921 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6587], 60.00th=[ 6652], 00:10:19.921 | 70.00th=[ 6718], 80.00th=[ 6783], 90.00th=[ 6849], 95.00th=[ 7046], 00:10:19.921 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 8848], 99.95th=[ 9765], 00:10:19.921 | 99.99th=[ 9765] 00:10:19.921 write: IOPS=10.1k, BW=39.3MiB/s (41.3MB/s)(39.4MiB/1001msec); 0 zone resets 00:10:19.921 slat (usec): min=2, max=1279, avg=47.39, stdev=166.60 00:10:19.921 clat (usec): min=462, max=7731, avg=6226.38, stdev=439.81 00:10:19.921 lat (usec): min=1191, max=7734, avg=6273.77, stdev=427.20 00:10:19.921 clat percentiles (usec): 00:10:19.921 | 1.00th=[ 5014], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6063], 00:10:19.921 | 30.00th=[ 6128], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6325], 00:10:19.921 | 70.00th=[ 6390], 80.00th=[ 6456], 90.00th=[ 6587], 95.00th=[ 6718], 00:10:19.921 | 99.00th=[ 7111], 99.50th=[ 7242], 99.90th=[ 7504], 99.95th=[ 7701], 00:10:19.921 | 99.99th=[ 7701] 00:10:19.921 bw ( KiB/s): min=40960, max=40960, per=46.92%, avg=40960.00, stdev= 0.00, samples=1 00:10:19.921 iops : min=10240, max=10240, avg=10240.00, stdev= 0.00, samples=1 00:10:19.921 lat (usec) : 500=0.01% 00:10:19.921 lat (msec) : 2=0.11%, 4=0.25%, 10=99.64% 00:10:19.921 cpu : usr=3.30%, sys=9.50%, ctx=1321, majf=0, minf=1 00:10:19.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:19.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.921 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.921 issued rwts: total=9728,10082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.921 job3: (groupid=0, jobs=1): err= 0: pid=2709474: Sun Dec 15 15:57:48 2024 00:10:19.921 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:19.921 slat (usec): min=2, max=1731, avg=135.49, stdev=317.81 00:10:19.921 clat (usec): min=15217, max=19159, avg=17445.73, stdev=497.38 00:10:19.921 lat (usec): min=16263, max=19240, avg=17581.23, stdev=397.16 00:10:19.921 clat percentiles (usec): 00:10:19.921 | 1.00th=[16057], 5.00th=[16581], 10.00th=[16712], 20.00th=[17171], 00:10:19.921 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:10:19.921 | 70.00th=[17695], 80.00th=[17957], 90.00th=[17957], 95.00th=[18220], 00:10:19.921 | 99.00th=[18482], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:10:19.921 | 99.99th=[19268] 00:10:19.921 write: IOPS=3934, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1003msec); 0 zone resets 00:10:19.921 slat (usec): min=2, max=1630, avg=126.46, stdev=297.94 00:10:19.921 clat (usec): min=1861, max=18399, avg=16254.69, stdev=1459.36 00:10:19.921 lat (usec): min=3134, max=18403, avg=16381.16, stdev=1432.46 00:10:19.921 clat percentiles (usec): 00:10:19.921 | 1.00th=[ 7177], 5.00th=[15401], 10.00th=[15795], 20.00th=[16188], 00:10:19.921 | 30.00th=[16319], 40.00th=[16319], 50.00th=[16450], 60.00th=[16581], 00:10:19.921 | 70.00th=[16581], 80.00th=[16712], 90.00th=[16909], 95.00th=[17171], 00:10:19.921 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:10:19.921 | 99.99th=[18482] 00:10:19.921 bw ( KiB/s): min=14168, max=16384, per=17.50%, avg=15276.00, stdev=1566.95, samples=2 00:10:19.921 iops : min= 3542, max= 4096, avg=3819.00, stdev=391.74, samples=2 00:10:19.921 lat (msec) : 2=0.01%, 4=0.17%, 10=0.57%, 20=99.24% 00:10:19.921 cpu : usr=1.80%, sys=4.49%, ctx=1325, majf=0, minf=1 00:10:19.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:19.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.921 issued rwts: total=3584,3946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.921 00:10:19.921 Run status group 0 (all jobs): 00:10:19.921 READ: bw=79.7MiB/s (83.6MB/s), 13.9MiB/s-38.0MiB/s (14.6MB/s-39.8MB/s), io=80.0MiB (83.9MB), run=1001-1004msec 00:10:19.921 WRITE: bw=85.2MiB/s (89.4MB/s), 15.3MiB/s-39.3MiB/s (16.1MB/s-41.3MB/s), io=85.6MiB (89.7MB), run=1001-1004msec 00:10:19.921 00:10:19.921 Disk stats (read/write): 00:10:19.921 nvme0n1: ios=3121/3174, merge=0/0, ticks=13360/12943, in_queue=26303, util=84.45% 00:10:19.921 nvme0n2: ios=3072/3171, merge=0/0, ticks=13407/12855, in_queue=26262, util=85.39% 00:10:19.921 nvme0n3: ios=8192/8290, merge=0/0, ticks=14416/13811, in_queue=28227, util=88.36% 00:10:19.921 nvme0n4: ios=3072/3177, merge=0/0, ticks=13390/12917, in_queue=26307, util=89.51% 00:10:19.921 15:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:19.921 [global] 00:10:19.921 thread=1 00:10:19.921 invalidate=1 00:10:19.921 rw=randwrite 00:10:19.921 time_based=1 00:10:19.921 runtime=1 00:10:19.921 ioengine=libaio 00:10:19.921 direct=1 00:10:19.921 bs=4096 00:10:19.921 
iodepth=128 00:10:19.921 norandommap=0 00:10:19.921 numjobs=1 00:10:19.921 00:10:19.921 verify_dump=1 00:10:19.921 verify_backlog=512 00:10:19.921 verify_state_save=0 00:10:19.921 do_verify=1 00:10:19.921 verify=crc32c-intel 00:10:19.921 [job0] 00:10:19.921 filename=/dev/nvme0n1 00:10:19.921 [job1] 00:10:19.921 filename=/dev/nvme0n2 00:10:19.921 [job2] 00:10:19.921 filename=/dev/nvme0n3 00:10:19.921 [job3] 00:10:19.921 filename=/dev/nvme0n4 00:10:19.921 Could not set queue depth (nvme0n1) 00:10:19.921 Could not set queue depth (nvme0n2) 00:10:19.921 Could not set queue depth (nvme0n3) 00:10:19.921 Could not set queue depth (nvme0n4) 00:10:20.179 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.179 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.179 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.179 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.179 fio-3.35 00:10:20.179 Starting 4 threads 00:10:21.554 00:10:21.554 job0: (groupid=0, jobs=1): err= 0: pid=2709865: Sun Dec 15 15:57:49 2024 00:10:21.554 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:21.554 slat (usec): min=2, max=3842, avg=116.31, stdev=455.90 00:10:21.554 clat (usec): min=8987, max=19737, avg=15280.85, stdev=3644.75 00:10:21.554 lat (usec): min=9416, max=19746, avg=15397.16, stdev=3643.50 00:10:21.554 clat percentiles (usec): 00:10:21.554 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11076], 00:10:21.554 | 30.00th=[11731], 40.00th=[12256], 50.00th=[17957], 60.00th=[18482], 00:10:21.554 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19006], 95.00th=[19268], 00:10:21.554 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 00:10:21.554 | 99.99th=[19792] 00:10:21.554 write: IOPS=4311, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1003msec); 0 zone resets 00:10:21.554 slat (usec): min=2, max=3905, avg=117.11, stdev=461.99 00:10:21.554 clat (usec): min=1713, max=19557, avg=14853.22, stdev=3961.29 00:10:21.554 lat (usec): min=4387, max=19568, avg=14970.33, stdev=3964.64 00:10:21.554 clat percentiles (usec): 00:10:21.554 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10290], 20.00th=[10552], 00:10:21.554 | 30.00th=[11469], 40.00th=[11731], 50.00th=[15270], 60.00th=[18482], 00:10:21.554 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19006], 95.00th=[19268], 00:10:21.554 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:10:21.554 | 99.99th=[19530] 00:10:21.554 bw ( KiB/s): min=13088, max=20480, per=18.83%, avg=16784.00, stdev=5226.93, samples=2 00:10:21.554 iops : min= 3272, max= 5120, avg=4196.00, stdev=1306.73, samples=2 00:10:21.554 lat (msec) : 2=0.01%, 10=2.42%, 20=97.57% 00:10:21.554 cpu : usr=2.79%, sys=3.49%, ctx=1834, majf=0, minf=1 00:10:21.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:21.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.554 issued rwts: total=4096,4324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.554 job1: (groupid=0, jobs=1): err= 0: pid=2709884: Sun Dec 15 15:57:49 2024 00:10:21.554 read: IOPS=9718, BW=38.0MiB/s (39.8MB/s)(38.0MiB/1001msec) 
00:10:21.554 slat (usec): min=2, max=1403, avg=50.57, stdev=172.18 00:10:21.554 clat (usec): min=4725, max=12885, avg=6600.08, stdev=2355.25 00:10:21.554 lat (usec): min=4728, max=12896, avg=6650.65, stdev=2373.18 00:10:21.554 clat percentiles (usec): 00:10:21.554 | 1.00th=[ 4948], 5.00th=[ 5145], 10.00th=[ 5211], 20.00th=[ 5276], 00:10:21.554 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5473], 60.00th=[ 5538], 00:10:21.554 | 70.00th=[ 5735], 80.00th=[ 6259], 90.00th=[11207], 95.00th=[11994], 00:10:21.554 | 99.00th=[12387], 99.50th=[12387], 99.90th=[12780], 99.95th=[12911], 00:10:21.554 | 99.99th=[12911] 00:10:21.554 write: IOPS=9824, BW=38.4MiB/s (40.2MB/s)(38.4MiB/1001msec); 0 zone resets 00:10:21.554 slat (usec): min=2, max=1371, avg=48.14, stdev=162.06 00:10:21.554 clat (usec): min=414, max=12248, avg=6342.65, stdev=2393.80 00:10:21.554 lat (usec): min=1043, max=12251, avg=6390.79, stdev=2411.25 00:10:21.554 clat percentiles (usec): 00:10:21.554 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 5014], 00:10:21.554 | 30.00th=[ 5080], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5276], 00:10:21.554 | 70.00th=[ 5473], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11469], 00:10:21.554 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:10:21.554 | 99.99th=[12256] 00:10:21.554 bw ( KiB/s): min=31392, max=31392, per=35.22%, avg=31392.00, stdev= 0.00, samples=1 00:10:21.554 iops : min= 7848, max= 7848, avg=7848.00, stdev= 0.00, samples=1 00:10:21.554 lat (usec) : 500=0.01% 00:10:21.554 lat (msec) : 2=0.16%, 4=0.27%, 10=80.28%, 20=19.28% 00:10:21.554 cpu : usr=5.20%, sys=6.80%, ctx=1898, majf=0, minf=2 00:10:21.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:21.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.554 issued rwts: total=9728,9834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.554 job2: (groupid=0, jobs=1): err= 0: pid=2709905: Sun Dec 15 15:57:49 2024 00:10:21.554 read: IOPS=3880, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1003msec) 00:10:21.554 slat (usec): min=2, max=3635, avg=128.55, stdev=516.24 00:10:21.554 clat (usec): min=2564, max=21444, avg=16342.21, stdev=3107.00 00:10:21.554 lat (usec): min=6080, max=21453, avg=16470.76, stdev=3084.81 00:10:21.554 clat percentiles (usec): 00:10:21.554 | 1.00th=[10683], 5.00th=[12256], 10.00th=[12256], 20.00th=[13042], 00:10:21.554 | 30.00th=[13566], 40.00th=[14222], 50.00th=[18482], 60.00th=[18744], 00:10:21.554 | 70.00th=[19006], 80.00th=[19006], 90.00th=[19268], 95.00th=[19530], 00:10:21.554 | 99.00th=[19792], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:10:21.554 | 99.99th=[21365] 00:10:21.554 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:21.554 slat (usec): min=2, max=3463, avg=117.78, stdev=464.98 00:10:21.554 clat (usec): min=9522, max=19671, avg=15431.42, stdev=3155.49 00:10:21.554 lat (usec): min=9563, max=19681, avg=15549.20, stdev=3145.24 00:10:21.554 clat percentiles (usec): 00:10:21.554 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11600], 20.00th=[11994], 00:10:21.554 | 30.00th=[12518], 40.00th=[12911], 50.00th=[16909], 60.00th=[18220], 00:10:21.554 | 70.00th=[18482], 80.00th=[18482], 90.00th=[18744], 95.00th=[19006], 00:10:21.554 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:10:21.554 | 99.99th=[19792] 00:10:21.554 
bw ( KiB/s): min=13408, max=19360, per=18.38%, avg=16384.00, stdev=4208.70, samples=2 00:10:21.554 iops : min= 3352, max= 4840, avg=4096.00, stdev=1052.17, samples=2 00:10:21.554 lat (msec) : 4=0.01%, 10=0.41%, 20=99.20%, 50=0.38% 00:10:21.554 cpu : usr=2.40%, sys=3.99%, ctx=1783, majf=0, minf=1 00:10:21.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:21.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.554 issued rwts: total=3892,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.554 job3: (groupid=0, jobs=1): err= 0: pid=2709911: Sun Dec 15 15:57:49 2024 00:10:21.554 read: IOPS=3921, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1003msec) 00:10:21.554 slat (usec): min=2, max=4136, avg=126.96, stdev=582.95 00:10:21.554 clat (usec): min=2641, max=21418, avg=16257.99, stdev=3312.05 00:10:21.554 lat (usec): min=3157, max=21426, avg=16384.94, stdev=3285.25 00:10:21.554 clat percentiles (usec): 00:10:21.554 | 1.00th=[ 8029], 5.00th=[11731], 10.00th=[12125], 20.00th=[12780], 00:10:21.554 | 30.00th=[13566], 40.00th=[14222], 50.00th=[18744], 60.00th=[18744], 00:10:21.554 | 70.00th=[19006], 80.00th=[19006], 90.00th=[19268], 95.00th=[19268], 00:10:21.554 | 99.00th=[19530], 99.50th=[19530], 99.90th=[21365], 99.95th=[21365], 00:10:21.554 | 99.99th=[21365] 00:10:21.554 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:21.554 slat (usec): min=2, max=4057, avg=117.01, stdev=531.11 00:10:21.554 clat (usec): min=9308, max=19400, avg=15340.96, stdev=3191.37 00:10:21.554 lat (usec): min=9323, max=19409, avg=15457.97, stdev=3170.13 00:10:21.554 clat percentiles (usec): 00:10:21.554 | 1.00th=[10552], 5.00th=[11338], 10.00th=[11338], 20.00th=[11863], 00:10:21.554 | 30.00th=[12518], 40.00th=[12911], 50.00th=[15664], 60.00th=[18220], 00:10:21.554 | 70.00th=[18482], 80.00th=[18482], 90.00th=[18744], 95.00th=[18744], 00:10:21.554 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:10:21.554 | 99.99th=[19530] 00:10:21.554 bw ( KiB/s): min=13280, max=19488, per=18.38%, avg=16384.00, stdev=4389.72, samples=2 00:10:21.554 iops : min= 3320, max= 4872, avg=4096.00, stdev=1097.43, samples=2 00:10:21.554 lat (msec) : 4=0.32%, 10=0.46%, 20=99.15%, 50=0.06% 00:10:21.554 cpu : usr=3.09%, sys=3.99%, ctx=625, majf=0, minf=1 00:10:21.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:21.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.554 issued rwts: total=3933,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.554 00:10:21.554 Run status group 0 (all jobs): 00:10:21.554 READ: bw=84.3MiB/s (88.4MB/s), 15.2MiB/s-38.0MiB/s (15.9MB/s-39.8MB/s), io=84.6MiB (88.7MB), run=1001-1003msec 00:10:21.554 WRITE: bw=87.0MiB/s (91.3MB/s), 16.0MiB/s-38.4MiB/s (16.7MB/s-40.2MB/s), io=87.3MiB (91.5MB), run=1001-1003msec 00:10:21.554 00:10:21.554 Disk stats (read/write): 00:10:21.554 nvme0n1: ios=3633/3683, merge=0/0, ticks=13199/13222, in_queue=26421, util=84.27% 00:10:21.555 nvme0n2: ios=7680/7894, merge=0/0, ticks=13307/12858, in_queue=26165, util=85.20% 00:10:21.555 nvme0n3: ios=3251/3584, merge=0/0, ticks=12990/13266, in_queue=26256, util=88.45% 00:10:21.555 
nvme0n4: ios=3267/3584, merge=0/0, ticks=12888/13039, in_queue=25927, util=89.50% 00:10:21.555 15:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:21.555 15:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2710046 00:10:21.555 15:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:21.555 15:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:21.555 [global] 00:10:21.555 thread=1 00:10:21.555 invalidate=1 00:10:21.555 rw=read 00:10:21.555 time_based=1 00:10:21.555 runtime=10 00:10:21.555 ioengine=libaio 00:10:21.555 direct=1 00:10:21.555 bs=4096 00:10:21.555 iodepth=1 00:10:21.555 norandommap=1 00:10:21.555 numjobs=1 00:10:21.555 00:10:21.555 [job0] 00:10:21.555 filename=/dev/nvme0n1 00:10:21.555 [job1] 00:10:21.555 filename=/dev/nvme0n2 00:10:21.555 [job2] 00:10:21.555 filename=/dev/nvme0n3 00:10:21.555 [job3] 00:10:21.555 filename=/dev/nvme0n4 00:10:21.555 Could not set queue depth (nvme0n1) 00:10:21.555 Could not set queue depth (nvme0n2) 00:10:21.555 Could not set queue depth (nvme0n3) 00:10:21.555 Could not set queue depth (nvme0n4) 00:10:21.812 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.812 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.812 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.812 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.812 fio-3.35 00:10:21.812 Starting 4 threads 00:10:24.334 15:57:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:24.592 15:57:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:24.592 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=67055616, buflen=4096 00:10:24.592 fio: pid=2710346, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:24.592 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=114880512, buflen=4096 00:10:24.592 fio: pid=2710338, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:24.592 15:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:24.592 15:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:24.849 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=16084992, buflen=4096 00:10:24.850 fio: pid=2710308, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:24.850 15:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:24.850 15:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:25.107 fio: io_u error on file /dev/nvme0n2: Operation not supported: 
read offset=26693632, buflen=4096 00:10:25.107 fio: pid=2710325, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.107 15:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.107 15:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:25.107 00:10:25.107 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2710308: Sun Dec 15 15:57:53 2024 00:10:25.107 read: IOPS=6688, BW=26.1MiB/s (27.4MB/s)(79.3MiB/3037msec) 00:10:25.107 slat (usec): min=6, max=16503, avg=11.71, stdev=170.29 00:10:25.107 clat (usec): min=50, max=8604, avg=135.42, stdev=67.46 00:10:25.107 lat (usec): min=58, max=16578, avg=147.13, stdev=182.47 00:10:25.107 clat percentiles (usec): 00:10:25.107 | 1.00th=[ 59], 5.00th=[ 74], 10.00th=[ 77], 20.00th=[ 93], 00:10:25.107 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:10:25.107 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 165], 00:10:25.107 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 212], 99.95th=[ 225], 00:10:25.107 | 99.99th=[ 392] 00:10:25.107 bw ( KiB/s): min=24752, max=25320, per=23.33%, avg=25019.20, stdev=228.43, samples=5 00:10:25.107 iops : min= 6188, max= 6330, avg=6254.80, stdev=57.11, samples=5 00:10:25.107 lat (usec) : 100=20.51%, 250=79.46%, 500=0.02% 00:10:25.107 lat (msec) : 10=0.01% 00:10:25.107 cpu : usr=3.03%, sys=9.98%, ctx=20317, majf=0, minf=1 00:10:25.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.107 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.107 issued rwts: total=20312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.107 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2710325: Sun Dec 15 15:57:53 2024 00:10:25.107 read: IOPS=7005, BW=27.4MiB/s (28.7MB/s)(89.5MiB/3269msec) 00:10:25.108 slat (usec): min=8, max=15857, avg=11.58, stdev=166.48 00:10:25.108 clat (usec): min=33, max=21647, avg=129.01, stdev=204.09 00:10:25.108 lat (usec): min=55, max=21656, avg=140.59, stdev=263.08 00:10:25.108 clat percentiles (usec): 00:10:25.108 | 1.00th=[ 53], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 79], 00:10:25.108 | 30.00th=[ 124], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:10:25.108 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 165], 00:10:25.108 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 212], 99.95th=[ 229], 00:10:25.108 | 99.99th=[ 396] 00:10:25.108 bw ( KiB/s): min=25056, max=32784, per=24.70%, avg=26480.00, stdev=3091.20, samples=6 00:10:25.108 iops : min= 6264, max= 8196, avg=6620.00, stdev=772.80, samples=6 00:10:25.108 lat (usec) : 50=0.16%, 100=26.15%, 250=73.65%, 500=0.02% 00:10:25.108 lat (msec) : 50=0.01% 00:10:25.108 cpu : usr=2.97%, sys=10.25%, ctx=22910, majf=0, minf=2 00:10:25.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.108 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.108 issued rwts: total=22902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.108 
latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.108 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2710338: Sun Dec 15 15:57:53 2024 00:10:25.108 read: IOPS=9946, BW=38.8MiB/s (40.7MB/s)(110MiB/2820msec) 00:10:25.108 slat (usec): min=8, max=12872, avg= 9.82, stdev=100.25 00:10:25.108 clat (usec): min=57, max=378, avg=88.91, stdev=13.05 00:10:25.108 lat (usec): min=66, max=13003, avg=98.73, stdev=101.43 00:10:25.108 clat percentiles (usec): 00:10:25.108 | 1.00th=[ 77], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:10:25.108 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 88], 00:10:25.108 | 70.00th=[ 89], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 113], 00:10:25.108 | 99.00th=[ 145], 99.50th=[ 149], 99.90th=[ 167], 99.95th=[ 178], 00:10:25.108 | 99.99th=[ 221] 00:10:25.108 bw ( KiB/s): min=39856, max=41560, per=38.41%, avg=41185.60, stdev=743.92, samples=5 00:10:25.108 iops : min= 9964, max=10390, avg=10296.40, stdev=185.98, samples=5 00:10:25.108 lat (usec) : 100=92.58%, 250=7.41%, 500=0.01% 00:10:25.108 cpu : usr=4.54%, sys=14.19%, ctx=28050, majf=0, minf=2 00:10:25.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.108 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.108 issued rwts: total=28048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.108 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2710346: Sun Dec 15 15:57:53 2024 00:10:25.108 read: IOPS=6192, BW=24.2MiB/s (25.4MB/s)(63.9MiB/2644msec) 00:10:25.108 slat (nsec): min=8377, max=45532, avg=10542.84, stdev=2661.64 00:10:25.108 clat (usec): min=68, max=420, avg=148.12, stdev=15.35 00:10:25.108 lat (usec): min=87, max=438, avg=158.67, stdev=15.35 00:10:25.108 clat percentiles (usec): 00:10:25.108 | 1.00th=[ 94], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:10:25.108 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:10:25.108 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 172], 00:10:25.108 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 212], 99.95th=[ 235], 00:10:25.108 | 99.99th=[ 412] 00:10:25.108 bw ( KiB/s): min=24808, max=25032, per=23.28%, avg=24958.40, stdev=91.46, samples=5 00:10:25.108 iops : min= 6202, max= 6258, avg=6239.60, stdev=22.86, samples=5 00:10:25.108 lat (usec) : 100=1.61%, 250=98.35%, 500=0.04% 00:10:25.108 cpu : usr=3.48%, sys=8.89%, ctx=16372, majf=0, minf=2 00:10:25.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.108 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.108 issued rwts: total=16372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.108 00:10:25.108 Run status group 0 (all jobs): 00:10:25.108 READ: bw=105MiB/s (110MB/s), 24.2MiB/s-38.8MiB/s (25.4MB/s-40.7MB/s), io=342MiB (359MB), run=2644-3269msec 00:10:25.108 00:10:25.108 Disk stats (read/write): 00:10:25.108 nvme0n1: ios=18345/0, merge=0/0, ticks=2428/0, in_queue=2428, util=93.49% 00:10:25.108 nvme0n2: ios=20523/0, merge=0/0, ticks=2658/0, in_queue=2658, util=93.96% 00:10:25.108 nvme0n3: ios=26274/0, merge=0/0, ticks=2066/0, 
in_queue=2066, util=96.03% 00:10:25.108 nvme0n4: ios=16132/0, merge=0/0, ticks=2265/0, in_queue=2265, util=96.46% 00:10:25.365 15:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.365 15:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:25.622 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.622 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:25.879 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.879 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:25.880 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.880 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:26.137 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:26.137 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2710046 00:10:26.137 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:26.137 15:57:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:27.070 nvmf hotplug test: fio failed as expected 00:10:27.070 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:27.327 15:57:55 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:27.327 rmmod nvme_rdma 00:10:27.327 rmmod nvme_fabrics 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 2707194 ']' 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 2707194 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2707194 ']' 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2707194 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.327 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2707194 00:10:27.585 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:27.585 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:27.585 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2707194' 00:10:27.585 killing process with pid 2707194 00:10:27.585 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2707194 00:10:27.585 15:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2707194 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:10:27.843 00:10:27.843 real 0m26.490s 00:10:27.843 user 2m7.204s 00:10:27.843 sys 0m10.218s 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.843 15:57:56 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.843 ************************************ 00:10:27.843 END TEST nvmf_fio_target 00:10:27.843 ************************************ 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.843 ************************************ 00:10:27.843 START TEST nvmf_bdevio 00:10:27.843 ************************************ 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:27.843 * Looking for test storage... 00:10:27.843 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:27.843 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:28.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.101 --rc genhtml_branch_coverage=1 00:10:28.101 --rc genhtml_function_coverage=1 00:10:28.101 --rc genhtml_legend=1 00:10:28.101 --rc geninfo_all_blocks=1 00:10:28.101 --rc geninfo_unexecuted_blocks=1 00:10:28.101 00:10:28.101 ' 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:28.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.101 --rc genhtml_branch_coverage=1 00:10:28.101 --rc genhtml_function_coverage=1 00:10:28.101 --rc genhtml_legend=1 00:10:28.101 --rc geninfo_all_blocks=1 00:10:28.101 --rc geninfo_unexecuted_blocks=1 00:10:28.101 00:10:28.101 ' 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:28.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.101 --rc genhtml_branch_coverage=1 00:10:28.101 --rc genhtml_function_coverage=1 00:10:28.101 --rc genhtml_legend=1 00:10:28.101 --rc geninfo_all_blocks=1 00:10:28.101 --rc geninfo_unexecuted_blocks=1 00:10:28.101 00:10:28.101 ' 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:28.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.101 --rc genhtml_branch_coverage=1 00:10:28.101 --rc genhtml_function_coverage=1 00:10:28.101 --rc genhtml_legend=1 00:10:28.101 --rc geninfo_all_blocks=1 00:10:28.101 --rc geninfo_unexecuted_blocks=1 00:10:28.101 00:10:28.101 ' 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:28.101 15:57:56 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.101 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.102 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.102 15:57:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.658 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:34.659 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:34.659 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:34.659 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:34.659 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # rdma_device_init 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:34.659 
15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@526 -- # allocate_nic_ips 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:34.659 6: mlx_0_0: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:10:34.659 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:34.659 altname enp217s0f0np0 00:10:34.659 altname ens818f0np0 00:10:34.659 inet 192.168.100.8/24 scope global mlx_0_0 00:10:34.659 valid_lft forever preferred_lft forever 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:34.659 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.659 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:34.659 altname enp217s0f1np1 00:10:34.659 altname ens818f1np1 00:10:34.659 inet 192.168.100.9/24 scope global mlx_0_1 00:10:34.659 valid_lft forever preferred_lft forever 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.659 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:34.660 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:34.660 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:34.919 15:58:03 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:10:34.919 192.168.100.9' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:10:34.919 192.168.100.9' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # head -n 1 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:10:34.919 192.168.100.9' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # head -n 1 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # tail -n +2 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:10:34.919 15:58:03 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=2714721 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 2714721 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2714721 ']' 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.919 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.919 [2024-12-15 15:58:03.373526] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:34.919 [2024-12-15 15:58:03.373583] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.919 [2024-12-15 15:58:03.443768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.919 [2024-12-15 15:58:03.482890] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.919 [2024-12-15 15:58:03.482948] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.919 [2024-12-15 15:58:03.482958] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.919 [2024-12-15 15:58:03.482967] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.919 [2024-12-15 15:58:03.482974] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
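
For reference, the nvmf_tgt launch above (nvmfappstart with -m 0x78) follows the usual pattern: start the target with a core mask, record its PID, then poll the RPC socket until it answers. A minimal sketch under those assumptions — the polling loop and variable names are illustrative stand-ins for the waitforlisten helper, not its literal implementation:

    # Illustrative bring-up; paths match this workspace, the loop is a stand-in for waitforlisten.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &   # -m 0x78 places reactors on cores 3-6, as the log shows
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the target listens on /var/tmp/spdk.sock
    done
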
00:10:34.919 [2024-12-15 15:58:03.483116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:34.919 [2024-12-15 15:58:03.483205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:34.919 [2024-12-15 15:58:03.483290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.919 [2024-12-15 15:58:03.483292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.178 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.178 [2024-12-15 15:58:03.667226] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x579740/0x57dc30) succeed. 00:10:35.178 [2024-12-15 15:58:03.677723] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x57ad80/0x5bf2d0) succeed. 
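
Pulled together, the target-side RPC sequence that bdevio.sh runs next (script lines 18-22, all visible below) amounts to: create the RDMA transport, back it with a malloc bdev, and expose that bdev through a subsystem listener. A consolidated sketch using the same rpc.py and the same arguments as this run (rpc.py talking to the default /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport, 8192 B I/O unit
    $rpc bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
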
00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 Malloc0 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 [2024-12-15 15:58:03.846448] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:35.436 { 00:10:35.436 "params": { 00:10:35.436 "name": "Nvme$subsystem", 00:10:35.436 "trtype": "$TEST_TRANSPORT", 00:10:35.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.436 "adrfam": "ipv4", 00:10:35.436 "trsvcid": "$NVMF_PORT", 00:10:35.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.436 "hdgst": ${hdgst:-false}, 00:10:35.436 "ddgst": ${ddgst:-false} 00:10:35.436 }, 00:10:35.436 "method": "bdev_nvme_attach_controller" 00:10:35.436 } 00:10:35.436 EOF 00:10:35.436 )") 00:10:35.436 15:58:03 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:35.436 15:58:03 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:35.436 "params": { 00:10:35.436 "name": "Nvme1", 00:10:35.436 "trtype": "rdma", 00:10:35.436 "traddr": "192.168.100.8", 00:10:35.436 "adrfam": "ipv4", 00:10:35.436 "trsvcid": "4420", 00:10:35.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:35.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:35.436 "hdgst": false, 00:10:35.436 "ddgst": false 00:10:35.436 }, 00:10:35.436 "method": "bdev_nvme_attach_controller" 00:10:35.436 }' 00:10:35.436 [2024-12-15 15:58:03.897953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:35.436 [2024-12-15 15:58:03.897999] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714776 ] 00:10:35.436 [2024-12-15 15:58:03.967135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:35.694 [2024-12-15 15:58:04.007999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.694 [2024-12-15 15:58:04.008019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.694 [2024-12-15 15:58:04.008021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.694 I/O targets: 00:10:35.694 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:35.694 00:10:35.694 00:10:35.694 CUnit - A unit testing framework for C - Version 2.1-3 00:10:35.694 http://cunit.sourceforge.net/ 00:10:35.694 00:10:35.694 00:10:35.694 Suite: bdevio tests on: Nvme1n1 00:10:35.694 Test: blockdev write read block ...passed 00:10:35.694 Test: blockdev write zeroes read block ...passed 00:10:35.694 Test: blockdev write zeroes read no split ...passed 00:10:35.694 Test: blockdev write zeroes read split ...passed 00:10:35.694 Test: blockdev write zeroes read split partial ...passed 00:10:35.694 Test: blockdev reset ...[2024-12-15 15:58:04.207675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:35.694 [2024-12-15 15:58:04.230227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:35.694 [2024-12-15 15:58:04.257248] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
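The --json /dev/fd/62 argument in the bdevio invocation above is bash process substitution: gen_nvmf_target_json writes the controller JSON printed here into a pipe that bdevio opens as an ordinary file. A minimal sketch of the pattern:

    # <(...) appears to the child process as /dev/fd/NN, matching the
    # log line above; no temporary config file is written to disk.
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)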
00:10:35.694 passed 00:10:35.694 Test: blockdev write read 8 blocks ...passed 00:10:35.694 Test: blockdev write read size > 128k ...passed 00:10:35.694 Test: blockdev write read invalid size ...passed 00:10:35.694 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:35.694 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:35.694 Test: blockdev write read max offset ...passed 00:10:35.694 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:35.694 Test: blockdev writev readv 8 blocks ...passed 00:10:35.694 Test: blockdev writev readv 30 x 1block ...passed 00:10:35.694 Test: blockdev writev readv block ...passed 00:10:35.694 Test: blockdev writev readv size > 128k ...passed 00:10:35.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:35.694 Test: blockdev comparev and writev ...[2024-12-15 15:58:04.260174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:35.694 [2024-12-15 15:58:04.260203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.260215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:35.694 [2024-12-15 15:58:04.260225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.260392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:35.694 [2024-12-15 15:58:04.260403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.260413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:35.694 [2024-12-15 15:58:04.260422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.260599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:35.694 [2024-12-15 15:58:04.260609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.260619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:35.694 [2024-12-15 15:58:04.260628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.260806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:35.694 [2024-12-15 15:58:04.260822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.260838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:35.694 [2024-12-15 15:58:04.260854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:35.694 passed 00:10:35.694 Test: blockdev nvme passthru rw ...passed 00:10:35.694 Test: blockdev nvme passthru vendor specific ...[2024-12-15 15:58:04.261126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:35.694 [2024-12-15 15:58:04.261138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.261187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:35.694 [2024-12-15 15:58:04.261197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.261235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:35.694 [2024-12-15 15:58:04.261245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:35.694 [2024-12-15 15:58:04.261289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:35.694 [2024-12-15 15:58:04.261299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:35.694 passed 00:10:35.952 Test: blockdev nvme admin passthru ...passed 00:10:35.952 Test: blockdev copy ...passed 00:10:35.952 00:10:35.952 Run Summary: Type Total Ran Passed Failed Inactive 00:10:35.952 suites 1 1 n/a 0 0 00:10:35.952 tests 23 23 23 0 0 00:10:35.952 asserts 152 152 152 0 n/a 00:10:35.952 00:10:35.952 Elapsed time = 0.172 seconds 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:35.952 rmmod nvme_rdma 00:10:35.952 rmmod nvme_fabrics 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.952 15:58:04 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 2714721 ']' 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 2714721 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2714721 ']' 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2714721 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:35.952 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.210 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2714721 00:10:36.210 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:36.210 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:36.210 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2714721' 00:10:36.210 killing process with pid 2714721 00:10:36.210 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2714721 00:10:36.210 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2714721 00:10:36.468 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:36.468 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:10:36.468 00:10:36.468 real 0m8.584s 00:10:36.468 user 0m8.299s 00:10:36.468 sys 0m5.809s 00:10:36.468 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.468 15:58:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.468 ************************************ 00:10:36.468 END TEST nvmf_bdevio 00:10:36.468 ************************************ 00:10:36.468 15:58:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:36.468 00:10:36.468 real 4m8.154s 00:10:36.468 user 10m44.573s 00:10:36.468 sys 1m35.707s 00:10:36.468 15:58:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.468 15:58:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.468 ************************************ 00:10:36.468 END TEST nvmf_target_core 00:10:36.468 ************************************ 00:10:36.468 15:58:04 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:36.468 15:58:04 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.468 15:58:04 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.468 15:58:04 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:36.468 ************************************ 00:10:36.468 START TEST nvmf_target_extra 00:10:36.468 ************************************ 00:10:36.468 15:58:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:36.726 * Looking for test storage... 00:10:36.726 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:36.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.726 --rc genhtml_branch_coverage=1 00:10:36.726 --rc genhtml_function_coverage=1 00:10:36.726 --rc genhtml_legend=1 00:10:36.726 --rc geninfo_all_blocks=1 00:10:36.726 --rc geninfo_unexecuted_blocks=1 00:10:36.726 00:10:36.726 ' 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:36.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.726 --rc genhtml_branch_coverage=1 00:10:36.726 --rc genhtml_function_coverage=1 00:10:36.726 --rc genhtml_legend=1 00:10:36.726 --rc geninfo_all_blocks=1 00:10:36.726 --rc geninfo_unexecuted_blocks=1 00:10:36.726 00:10:36.726 ' 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:36.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.726 --rc genhtml_branch_coverage=1 00:10:36.726 --rc genhtml_function_coverage=1 00:10:36.726 --rc genhtml_legend=1 00:10:36.726 --rc geninfo_all_blocks=1 00:10:36.726 --rc geninfo_unexecuted_blocks=1 00:10:36.726 00:10:36.726 ' 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:36.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.726 --rc genhtml_branch_coverage=1 00:10:36.726 --rc genhtml_function_coverage=1 00:10:36.726 --rc genhtml_legend=1 00:10:36.726 --rc geninfo_all_blocks=1 00:10:36.726 --rc geninfo_unexecuted_blocks=1 00:10:36.726 00:10:36.726 ' 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.726 15:58:05 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.727 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.727 ************************************ 00:10:36.727 START TEST nvmf_example 00:10:36.727 ************************************ 00:10:36.727 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:37.012 * Looking for test storage... 
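The nvme gen-hostnqn step above is where the suite derives its host identity: a UUID-based NQN whose UUID part doubles as the host ID. A small sketch of the same pattern (the connect call is illustrative, reusing this run's target address and subsystem):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip the prefix to get the bare UUID
    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN"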
00:10:37.012 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:37.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.012 --rc genhtml_branch_coverage=1 00:10:37.012 --rc genhtml_function_coverage=1 00:10:37.012 --rc genhtml_legend=1 00:10:37.012 --rc geninfo_all_blocks=1 00:10:37.012 --rc geninfo_unexecuted_blocks=1 00:10:37.012 00:10:37.012 ' 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:37.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.012 --rc genhtml_branch_coverage=1 00:10:37.012 --rc genhtml_function_coverage=1 00:10:37.012 --rc genhtml_legend=1 00:10:37.012 --rc geninfo_all_blocks=1 00:10:37.012 --rc geninfo_unexecuted_blocks=1 00:10:37.012 00:10:37.012 ' 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:37.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.012 --rc genhtml_branch_coverage=1 00:10:37.012 --rc genhtml_function_coverage=1 00:10:37.012 --rc genhtml_legend=1 00:10:37.012 --rc geninfo_all_blocks=1 00:10:37.012 --rc geninfo_unexecuted_blocks=1 00:10:37.012 00:10:37.012 ' 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:37.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.012 --rc genhtml_branch_coverage=1 00:10:37.012 --rc genhtml_function_coverage=1 00:10:37.012 --rc genhtml_legend=1 00:10:37.012 --rc geninfo_all_blocks=1 00:10:37.012 --rc geninfo_unexecuted_blocks=1 00:10:37.012 00:10:37.012 ' 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
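The lt 1.15 2 / cmp_versions trace above is scripts/common.sh comparing the installed lcov version against 2, component by component, exactly as the ver1/ver2 arrays suggest. A condensed sketch of that dotted-version comparison (the ver_lt helper name is mine, not the script's):

    ver_lt() {                      # return 0 iff $1 < $2, numeric per component
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # equal is not less-than
    }
    ver_lt 1.15 2 && echo "1.15 < 2"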
00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.012 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.013 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
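The "[: : integer expression expected" errors captured above come from evaluating [ '' -eq 1 ] when an optional flag variable is empty; the script shrugs them off and continues. A two-line sketch of the failure and its guard (flag is an illustrative name):

    flag=""
    [ "$flag" -eq 1 ] || true   # errors: '' is not an integer, as in the log
    [ "${flag:-0}" -eq 1 ]      # defaulting the expansion keeps the test well-formed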
00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.013 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
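gather_supported_nvmf_pci_devs, starting above, buckets NICs by PCI vendor/device ID (0x8086 Intel, 0x15b3 Mellanox) before settling on the mlx5 pair this rig has. A rough lspci equivalent for the ConnectX-4 Lx (0x1015) functions the run finds below:

    # -D keeps the PCI domain, -nn shows [vendor:device] codes.
    lspci -D -nn | awk '/15b3:1015/ {print $1}'   # -> 0000:d9:00.0, 0000:d9:00.1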
00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:43.569 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:43.569 15:58:11 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:43.569 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:43.569 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:43.569 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # rdma_device_init 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:43.569 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:43.569 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:43.569 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:43.569 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@526 -- # allocate_nic_ips 00:10:43.569 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:43.569 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:43.569 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:43.569 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:43.570 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:43.570 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:43.570 altname enp217s0f0np0 00:10:43.570 altname ens818f0np0 00:10:43.570 inet 192.168.100.8/24 scope global mlx_0_0 00:10:43.570 valid_lft forever preferred_lft forever 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:43.570 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:43.570 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:43.570 altname enp217s0f1np1 00:10:43.570 altname ens818f1np1 00:10:43.570 inet 192.168.100.9/24 scope global mlx_0_1 00:10:43.570 valid_lft forever preferred_lft forever 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:43.570 15:58:12 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:43.570 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:10:43.828 192.168.100.9' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@481 -- # echo '192.168.100.8 00:10:43.828 192.168.100.9' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # head -n 1 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # head -n 1 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:10:43.828 192.168.100.9' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # tail -n +2 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2718495 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2718495 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2718495 ']' 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
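The xtrace that follows provisions the freshly started example target over its JSON-RPC socket; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and waitforlisten above simply polls that socket until the app answers. As a hedged sketch only (not the harness's literal code), the same provisioning sequence issued by hand would look like this; the method names and arguments are lifted from the trace itself, while the $rpc shorthand is illustrative:

  # Sketch: manual equivalent of the rpc_cmd calls traced below.
  # scripts/rpc.py and these RPC method names ship with SPDK.
  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512    # 64 MiB malloc bdev, 512 B blocks -> "Malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

With the listener up, the trace then runs spdk_nvme_perf against 192.168.100.8:4420 (-q 64 -o 4096 -w randrw -M 30 -t 10), which produces the IOPS/latency table recorded further down.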
00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.828 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.760 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:45.017 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:57.210 Initializing NVMe Controllers 00:10:57.210 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:57.210 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:57.210 Initialization complete. Launching workers. 00:10:57.210 ======================================================== 00:10:57.210 Latency(us) 00:10:57.210 Device Information : IOPS MiB/s Average min max 00:10:57.210 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25546.19 99.79 2504.72 628.63 15040.14 00:10:57.210 ======================================================== 00:10:57.210 Total : 25546.19 99.79 2504.72 628.63 15040.14 00:10:57.210 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:57.210 rmmod nvme_rdma 00:10:57.210 rmmod nvme_fabrics 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 2718495 ']' 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 2718495 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2718495 ']' 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2718495 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2718495 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:57.210 15:58:24 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2718495' killing process with pid 2718495 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2718495 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2718495 00:10:57.210 nvmf threads initialize successfully 00:10:57.210 bdev subsystem init successfully 00:10:57.210 created an nvmf target service 00:10:57.210 create targets' poll groups done 00:10:57.210 all subsystems of target started 00:10:57.210 nvmf target is running 00:10:57.210 all subsystems of target stopped 00:10:57.210 destroy targets' poll groups done 00:10:57.210 destroyed the nvmf target service 00:10:57.210 bdev subsystem finish successfully 00:10:57.210 nvmf threads destroy successfully 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.210 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.210 00:10:57.210 real 0m19.759s 00:10:57.210 user 0m52.460s 00:10:57.210 sys 0m5.618s 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.210 ************************************ 00:10:57.210 END TEST nvmf_example 00:10:57.210 ************************************ 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.210 ************************************ 00:10:57.210 START TEST nvmf_filesystem 00:10:57.210 ************************************ 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:57.210 * Looking for test storage...
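The "Looking for test storage..." message above and the "Found test storage" line that follows come from the harness's free-space probe, which picks a writable directory with enough room before the filesystem tests run. A minimal sketch of that pattern, assuming GNU df; the candidate mounts and the 2 GiB floor below are illustrative assumptions, not the harness's actual values:

  # Sketch of a free-space probe (illustrative only; the real helper
  # in autotest_common.sh differs in detail).
  for mnt in /mnt /home /tmp; do
      # df --output=avail -B1 prints a header line, then bytes free.
      avail=$(df --output=avail -B1 "$mnt" 2>/dev/null | tail -n1)
      if [[ -n $avail ]] && (( avail >= 2 * 1024 * 1024 * 1024 )); then
          echo "* Found test storage at $mnt"
          break
      fi
  done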
00:10:57.210 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.210 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.211 --rc genhtml_branch_coverage=1 00:10:57.211 --rc genhtml_function_coverage=1 00:10:57.211 --rc genhtml_legend=1 00:10:57.211 --rc geninfo_all_blocks=1 00:10:57.211 --rc geninfo_unexecuted_blocks=1 00:10:57.211 00:10:57.211 ' 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.211 --rc genhtml_branch_coverage=1 00:10:57.211 --rc genhtml_function_coverage=1 00:10:57.211 --rc genhtml_legend=1 00:10:57.211 --rc geninfo_all_blocks=1 00:10:57.211 --rc geninfo_unexecuted_blocks=1 00:10:57.211 00:10:57.211 ' 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.211 --rc genhtml_branch_coverage=1 00:10:57.211 --rc genhtml_function_coverage=1 00:10:57.211 --rc genhtml_legend=1 00:10:57.211 --rc geninfo_all_blocks=1 00:10:57.211 --rc geninfo_unexecuted_blocks=1 00:10:57.211 00:10:57.211 ' 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.211 --rc genhtml_branch_coverage=1 00:10:57.211 --rc genhtml_function_coverage=1 00:10:57.211 --rc genhtml_legend=1 00:10:57.211 --rc geninfo_all_blocks=1 00:10:57.211 --rc geninfo_unexecuted_blocks=1 00:10:57.211 00:10:57.211 ' 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:10:57.211 15:58:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:57.211 
15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 
00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:57.211 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:57.212 15:58:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:57.212 #define SPDK_CONFIG_H 00:10:57.212 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:57.212 #define SPDK_CONFIG_APPS 1 00:10:57.212 #define SPDK_CONFIG_ARCH native 00:10:57.212 #undef SPDK_CONFIG_ASAN 00:10:57.212 #undef SPDK_CONFIG_AVAHI 00:10:57.212 #undef SPDK_CONFIG_CET 00:10:57.212 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:57.212 #define SPDK_CONFIG_COVERAGE 1 00:10:57.212 #define SPDK_CONFIG_CROSS_PREFIX 00:10:57.212 #undef SPDK_CONFIG_CRYPTO 00:10:57.212 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:57.212 #undef SPDK_CONFIG_CUSTOMOCF 00:10:57.212 #undef SPDK_CONFIG_DAOS 00:10:57.212 #define SPDK_CONFIG_DAOS_DIR 00:10:57.212 #define SPDK_CONFIG_DEBUG 1 00:10:57.212 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:57.212 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:57.212 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:10:57.212 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:57.212 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:57.212 #undef SPDK_CONFIG_DPDK_UADK 00:10:57.212 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:57.212 #define SPDK_CONFIG_EXAMPLES 1 00:10:57.212 #undef SPDK_CONFIG_FC 00:10:57.212 #define SPDK_CONFIG_FC_PATH 00:10:57.212 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:57.212 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:57.212 #define SPDK_CONFIG_FSDEV 1 00:10:57.212 #undef SPDK_CONFIG_FUSE 00:10:57.212 #undef SPDK_CONFIG_FUZZER 00:10:57.212 #define SPDK_CONFIG_FUZZER_LIB 00:10:57.212 #undef SPDK_CONFIG_GOLANG 00:10:57.212 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:57.212 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:57.212 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:57.212 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:57.212 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:57.212 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:57.212 #undef SPDK_CONFIG_HAVE_LZ4 00:10:57.212 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:57.212 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:57.212 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:57.212 #define SPDK_CONFIG_IDXD 1 00:10:57.212 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:57.212 #undef SPDK_CONFIG_IPSEC_MB 00:10:57.212 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:57.212 #define SPDK_CONFIG_ISAL 1 00:10:57.212 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:57.212 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:57.212 #define SPDK_CONFIG_LIBDIR 00:10:57.212 #undef SPDK_CONFIG_LTO 00:10:57.212 #define SPDK_CONFIG_MAX_LCORES 128 00:10:57.212 #define SPDK_CONFIG_NVME_CUSE 1 00:10:57.212 #undef SPDK_CONFIG_OCF 00:10:57.212 #define SPDK_CONFIG_OCF_PATH 00:10:57.212 #define SPDK_CONFIG_OPENSSL_PATH 00:10:57.212 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:57.212 #define SPDK_CONFIG_PGO_DIR 00:10:57.212 #undef SPDK_CONFIG_PGO_USE 00:10:57.212 #define SPDK_CONFIG_PREFIX /usr/local 00:10:57.212 #undef SPDK_CONFIG_RAID5F 00:10:57.212 #undef SPDK_CONFIG_RBD 00:10:57.212 #define SPDK_CONFIG_RDMA 1 00:10:57.212 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:57.212 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:57.212 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:57.212 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:57.212 #define SPDK_CONFIG_SHARED 1 00:10:57.212 #undef 
SPDK_CONFIG_SMA 00:10:57.212 #define SPDK_CONFIG_TESTS 1 00:10:57.212 #undef SPDK_CONFIG_TSAN 00:10:57.212 #define SPDK_CONFIG_UBLK 1 00:10:57.212 #define SPDK_CONFIG_UBSAN 1 00:10:57.212 #undef SPDK_CONFIG_UNIT_TESTS 00:10:57.212 #undef SPDK_CONFIG_URING 00:10:57.212 #define SPDK_CONFIG_URING_PATH 00:10:57.212 #undef SPDK_CONFIG_URING_ZNS 00:10:57.212 #undef SPDK_CONFIG_USDT 00:10:57.212 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:57.212 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:57.212 #undef SPDK_CONFIG_VFIO_USER 00:10:57.212 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:57.212 #define SPDK_CONFIG_VHOST 1 00:10:57.212 #define SPDK_CONFIG_VIRTIO 1 00:10:57.212 #undef SPDK_CONFIG_VTUNE 00:10:57.212 #define SPDK_CONFIG_VTUNE_DIR 00:10:57.212 #define SPDK_CONFIG_WERROR 1 00:10:57.212 #define SPDK_CONFIG_WPDK_DIR 00:10:57.212 #undef SPDK_CONFIG_XNVME 00:10:57.212 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.212 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:57.213 15:58:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:57.213 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:57.214 15:58:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:57.214 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:57.215 15:58:25 
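A few entries above, the trace builds a LeakSanitizer suppression file and wires it up through LSAN_OPTIONS. A sketch of that step, using the same path and leak pattern the log shows:

#!/usr/bin/env bash
# Recreate the suppression file from scratch so stale patterns never carry
# over, then tell LSAN to ignore the known libfuse3 allocation it flags.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo 'leak:libfuse3.so' > "$asan_suppression_file"
export LSAN_OPTIONS="suppressions=$asan_suppression_file"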
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2720713 ]] 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 
2720713 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.10v05e 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.10v05e/tests/target /tmp/spdk.10v05e 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=422735872 
00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4861693952 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=54214729728 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730590720 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7515860992 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30851833856 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865293312 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=13459456 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12323028992 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346118144 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23089152 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30865043456 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865297408 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=253952 00:10:57.215 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use 
avail _ mount 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6173044736 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173057024 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:57.216 * Looking for test storage... 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=54214729728 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9730453504 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:57.216 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 
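The set_test_storage walk that just returned parses `df -T` into per-mount arrays, converts the 1K-block figures to bytes, and checks the mount behind the test directory against the requested 2 GiB. A simplified, self-contained sketch of that logic (target_dir here is illustrative):

#!/usr/bin/env bash
# Field order matches `df -T` output: device, fstype, 1K-blocks, used,
# available, use%, mount point. Multiply by 1024 to work in bytes.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)

requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB, as in the trace
target_dir=/tmp                              # illustrative candidate
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
if (( avails["$mount"] >= requested_size )); then
    echo "enough space for tests on $mount"
fi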
00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 
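The scripts/common.sh entries here are cmp_versions deciding whether the installed lcov predates version 2. A compact sketch of that comparison, assuming purely numeric components (the real script validates each component with a regex first):

#!/usr/bin/env bash
# Split both versions on . - : and compare component-wise, left to right;
# missing components count as 0, and equal versions are not "less than".
lt() {
    local -a ver1 ver2
    local v len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1
}
lt 1.15 2 && echo "lcov 1.15 predates 2"   # the comparison from the trace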
00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:57.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.216 --rc genhtml_branch_coverage=1 00:10:57.216 --rc genhtml_function_coverage=1 00:10:57.216 --rc genhtml_legend=1 00:10:57.216 --rc geninfo_all_blocks=1 00:10:57.216 --rc geninfo_unexecuted_blocks=1 00:10:57.216 00:10:57.216 ' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:57.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.216 --rc genhtml_branch_coverage=1 00:10:57.216 --rc genhtml_function_coverage=1 00:10:57.216 --rc genhtml_legend=1 00:10:57.216 --rc geninfo_all_blocks=1 00:10:57.216 --rc geninfo_unexecuted_blocks=1 00:10:57.216 00:10:57.216 ' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:57.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.216 --rc genhtml_branch_coverage=1 00:10:57.216 --rc genhtml_function_coverage=1 00:10:57.216 --rc genhtml_legend=1 00:10:57.216 --rc geninfo_all_blocks=1 00:10:57.216 --rc geninfo_unexecuted_blocks=1 00:10:57.216 00:10:57.216 ' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:57.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.216 --rc genhtml_branch_coverage=1 00:10:57.216 --rc genhtml_function_coverage=1 00:10:57.216 --rc genhtml_legend=1 
00:10:57.216 --rc geninfo_all_blocks=1 00:10:57.216 --rc geninfo_unexecuted_blocks=1 00:10:57.216 00:10:57.216 ' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.216 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.217 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.217 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:03.862 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:03.862 15:58:31 
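A few entries back, nvmf/common.sh line 33 logged "[: : integer expression expected": test(1) was handed an empty string where -eq needs a number. A defensive variant of that kind of check (illustrative variable name, not the upstream fix):

#!/usr/bin/env bash
# Default the possibly-empty value to 0 before the numeric comparison,
# so '[' never sees an empty operand.
maybe_empty=""                       # stands in for the unset shell variable
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled or unset"
fi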
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:03.862 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:03.862 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:03.862 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
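The "Found 0000:d9:00.0 (0x15b3 - 0x1015)" lines above come from matching each PCI function's vendor/device pair against known NIC IDs. The harness keys off a prebuilt pci_bus_cache; this sketch rescans sysfs directly to the same effect:

#!/usr/bin/env bash
# Walk every PCI function and report those whose vendor ID is Mellanox
# (0x15b3); the device file distinguishes ConnectX generations (0x1015 here).
mellanox=0x15b3
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    if [[ $vendor == "$mellanox" ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
    fi
done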
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # rdma_device_init 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@526 -- # allocate_nic_ips 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.862 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.863 15:58:31 
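The "Found net devices under …: mlx_0_0" lines a little earlier were produced by mapping a PCI function to its network interfaces through sysfs, the same glob-and-strip pattern visible in the trace:

#!/usr/bin/env bash
# A network PCI function lists its interface names as subdirectories of
# /sys/bus/pci/devices/<bdf>/net/; strip the path to keep just the names.
pci=0000:d9:00.0   # example BDF taken from the log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"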
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:03.863 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:03.863 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:03.863 altname enp217s0f0np0 00:11:03.863 altname ens818f0np0 00:11:03.863 inet 192.168.100.8/24 scope global mlx_0_0 00:11:03.863 valid_lft forever preferred_lft forever 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:03.863 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:03.863 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:03.863 altname enp217s0f1np1 00:11:03.863 altname ens818f1np1 00:11:03.863 inet 192.168.100.9/24 scope global mlx_0_1 00:11:03.863 valid_lft forever preferred_lft forever 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:03.863 15:58:31 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:11:03.863 192.168.100.9' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:11:03.863 192.168.100.9' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # head -n 1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:11:03.863 192.168.100.9' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # tail -n +2 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # head -n 1 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:11:03.863 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:03.863 ************************************ 00:11:03.863 START TEST nvmf_filesystem_no_in_capsule 00:11:03.863 ************************************ 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.863 15:58:32 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=2724065 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 2724065 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2724065 ']' 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.863 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.863 [2024-12-15 15:58:32.109542] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:03.863 [2024-12-15 15:58:32.109590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.863 [2024-12-15 15:58:32.182649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.863 [2024-12-15 15:58:32.224091] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.863 [2024-12-15 15:58:32.224129] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.863 [2024-12-15 15:58:32.224139] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.864 [2024-12-15 15:58:32.224147] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.864 [2024-12-15 15:58:32.224170] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
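For orientation, the get_rdma_if_list / get_ip_address trace above boils down to a few lines of shell. A minimal sketch, assuming this host's mlx_0_0/mlx_0_1 interface names and 192.168.100.0/24 addressing (both specific to this CI machine):

  for dev in mlx_0_0 mlx_0_1; do
      # field 4 of `ip -o -4 addr show` is the CIDR address; cut strips the prefix
      ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
  done
  # prints 192.168.100.8 and 192.168.100.9; the head -n 1 / tail -n +2 pair in the
  # trace then splits these into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP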
00:11:03.864 [2024-12-15 15:58:32.224210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.864 [2024-12-15 15:58:32.224304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.864 [2024-12-15 15:58:32.224388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.864 [2024-12-15 15:58:32.224390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.864 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 [2024-12-15 15:58:32.371565] rdma.c:2737:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:03.864 [2024-12-15 15:58:32.394583] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x710e40/0x715330) succeed. 00:11:03.864 [2024-12-15 15:58:32.405071] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x712480/0x7569d0) succeed. 
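With the RDMA transport created (-c 0, i.e. no in-capsule data requested; the WARNING above shows the target raising it to its 256-byte minimum), the next stretch of trace issues the remaining target-side RPCs. The suite's rpc_cmd wrapper drives scripts/rpc.py against /var/tmp/spdk.sock; run standalone, the same configuration is roughly:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB backing bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

All flags are taken verbatim from the trace; only the standalone scripts/rpc.py invocation is a sketch.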
00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.123 Malloc1 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.123 [2024-12-15 15:58:32.643788] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:04.123 15:58:32 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.123 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:04.123 { 00:11:04.123 "name": "Malloc1", 00:11:04.123 "aliases": [ 00:11:04.123 "1f4621d9-9282-40e6-ada2-8bd2a4a36eba" 00:11:04.123 ], 00:11:04.123 "product_name": "Malloc disk", 00:11:04.123 "block_size": 512, 00:11:04.123 "num_blocks": 1048576, 00:11:04.123 "uuid": "1f4621d9-9282-40e6-ada2-8bd2a4a36eba", 00:11:04.123 "assigned_rate_limits": { 00:11:04.123 "rw_ios_per_sec": 0, 00:11:04.123 "rw_mbytes_per_sec": 0, 00:11:04.123 "r_mbytes_per_sec": 0, 00:11:04.124 "w_mbytes_per_sec": 0 00:11:04.124 }, 00:11:04.124 "claimed": true, 00:11:04.124 "claim_type": "exclusive_write", 00:11:04.124 "zoned": false, 00:11:04.124 "supported_io_types": { 00:11:04.124 "read": true, 00:11:04.124 "write": true, 00:11:04.124 "unmap": true, 00:11:04.124 "flush": true, 00:11:04.124 "reset": true, 00:11:04.124 "nvme_admin": false, 00:11:04.124 "nvme_io": false, 00:11:04.124 "nvme_io_md": false, 00:11:04.124 "write_zeroes": true, 00:11:04.124 "zcopy": true, 00:11:04.124 "get_zone_info": false, 00:11:04.124 "zone_management": false, 00:11:04.124 "zone_append": false, 00:11:04.124 "compare": false, 00:11:04.124 "compare_and_write": false, 00:11:04.124 "abort": true, 00:11:04.124 "seek_hole": false, 00:11:04.124 "seek_data": false, 00:11:04.124 "copy": true, 00:11:04.124 "nvme_iov_md": false 00:11:04.124 }, 00:11:04.124 "memory_domains": [ 00:11:04.124 { 00:11:04.124 "dma_device_id": "system", 00:11:04.124 "dma_device_type": 1 00:11:04.124 }, 00:11:04.124 { 00:11:04.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.124 "dma_device_type": 2 00:11:04.124 } 00:11:04.124 ], 00:11:04.124 "driver_specific": {} 00:11:04.124 } 00:11:04.124 ]' 00:11:04.124 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:04.382 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:04.382 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:04.382 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:04.382 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:04.382 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:04.382 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:11:04.382 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:05.317 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.317 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:05.317 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.317 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:05.317 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:07.222 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:07.222 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:07.222 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.222 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:07.222 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.222 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:07.222 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:07.222 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:07.481 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:08.856 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:08.856 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:08.856 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.856 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.856 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.856 ************************************ 00:11:08.856 START TEST filesystem_ext4 00:11:08.856 ************************************ 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:08.856 mke2fs 1.47.0 (5-Feb-2023) 00:11:08.856 Discarding device blocks: 0/522240 done 00:11:08.856 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:08.856 Filesystem UUID: 4c5095f6-90f5-4be0-beb5-a6f168a4493a 00:11:08.856 Superblock backups stored on 
blocks: 00:11:08.856 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:08.856 00:11:08.856 Allocating group tables: 0/64 done 00:11:08.856 Writing inode tables: 0/64 done 00:11:08.856 Creating journal (8192 blocks): done 00:11:08.856 Writing superblocks and filesystem accounting information: 0/64 done 00:11:08.856 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2724065 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.856 00:11:08.856 real 0m0.201s 00:11:08.856 user 0m0.026s 00:11:08.856 sys 0m0.080s 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:08.856 ************************************ 00:11:08.856 END TEST filesystem_ext4 00:11:08.856 ************************************ 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:11:08.856 ************************************ 00:11:08.856 START TEST filesystem_btrfs 00:11:08.856 ************************************ 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:08.856 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:08.857 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:08.857 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:08.857 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:08.857 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:08.857 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:08.857 btrfs-progs v6.8.1 00:11:08.857 See https://btrfs.readthedocs.io for more information. 00:11:08.857 00:11:08.857 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:08.857 NOTE: several default settings have changed in version 5.15, please make sure 00:11:08.857 this does not affect your deployments: 00:11:08.857 - DUP for metadata (-m dup) 00:11:08.857 - enabled no-holes (-O no-holes) 00:11:08.857 - enabled free-space-tree (-R free-space-tree) 00:11:08.857 00:11:08.857 Label: (null) 00:11:08.857 UUID: 779ccab8-8675-4fce-87ff-a6624f323bc2 00:11:08.857 Node size: 16384 00:11:08.857 Sector size: 4096 (CPU page size: 4096) 00:11:08.857 Filesystem size: 510.00MiB 00:11:08.857 Block group profiles: 00:11:08.857 Data: single 8.00MiB 00:11:08.857 Metadata: DUP 32.00MiB 00:11:08.857 System: DUP 8.00MiB 00:11:08.857 SSD detected: yes 00:11:08.857 Zoned device: no 00:11:08.857 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:08.857 Checksum: crc32c 00:11:08.857 Number of devices: 1 00:11:08.857 Devices: 00:11:08.857 ID SIZE PATH 00:11:08.857 1 510.00MiB /dev/nvme0n1p1 00:11:08.857 00:11:08.857 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:08.857 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2724065 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.115 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.115 00:11:09.115 real 0m0.245s 00:11:09.115 user 0m0.037s 00:11:09.116 sys 0m0.120s 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:09.116 ************************************ 00:11:09.116 END TEST filesystem_btrfs 
00:11:09.116 ************************************ 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.116 ************************************ 00:11:09.116 START TEST filesystem_xfs 00:11:09.116 ************************************ 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:09.116 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:09.374 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:09.374 = sectsz=512 attr=2, projid32bit=1 00:11:09.374 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:09.374 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:09.374 data = bsize=4096 blocks=130560, imaxpct=25 00:11:09.374 = sunit=0 swidth=0 blks 00:11:09.374 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:09.374 log =internal log bsize=4096 blocks=16384, version=2 00:11:09.374 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:09.374 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:09.374 Discarding blocks...Done. 
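Each filesystem_* subtest (ext4, btrfs, and the xfs run starting above) performs the same create/verify cycle against the exported namespace. Condensed from the trace, with $nvmfpid standing in for the shell variable that holds the target pid (2724065 in this run):

  # done once before the subtests (target/filesystem.sh@68-69):
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
  # then per filesystem (ext4 forces with -F, btrfs/xfs with -f):
  mkfs.xfs -f /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync        # push a write through the RDMA path
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                   # target process must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible to the host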
00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2724065 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.374 00:11:09.374 real 0m0.212s 00:11:09.374 user 0m0.033s 00:11:09.374 sys 0m0.081s 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:09.374 ************************************ 00:11:09.374 END TEST filesystem_xfs 00:11:09.374 ************************************ 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:09.374 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.310 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.310 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:10.310 15:58:38 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:10.310 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2724065 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2724065 ']' 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2724065 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2724065 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2724065' 00:11:10.568 killing process with pid 2724065 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2724065 00:11:10.568 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2724065 00:11:10.827 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:10.827 00:11:10.827 real 0m7.306s 00:11:10.827 user 0m28.458s 00:11:10.827 sys 0m1.214s 00:11:10.827 15:58:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.827 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.827 ************************************ 00:11:10.827 END TEST nvmf_filesystem_no_in_capsule 00:11:10.827 ************************************ 00:11:11.085 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:11.085 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:11.085 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.086 ************************************ 00:11:11.086 START TEST nvmf_filesystem_in_capsule 00:11:11.086 ************************************ 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=2725434 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 2725434 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2725434 ']' 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
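The in-capsule pass that starts here repeats the same flow against a fresh nvmf_tgt (pid 2725434); the only functional difference between the two passes is the transport's in-capsule data size:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0     # first pass
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096  # this pass

With -c 4096, host writes of up to 4 KiB travel inside the NVMe-oF command capsule itself rather than requiring a separate RDMA READ by the target.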
00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.086 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.086 [2024-12-15 15:58:39.486062] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:11.086 [2024-12-15 15:58:39.486107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.086 [2024-12-15 15:58:39.558860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.086 [2024-12-15 15:58:39.599185] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.086 [2024-12-15 15:58:39.599224] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.086 [2024-12-15 15:58:39.599234] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.086 [2024-12-15 15:58:39.599242] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.086 [2024-12-15 15:58:39.599248] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.086 [2024-12-15 15:58:39.599298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.086 [2024-12-15 15:58:39.599397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.086 [2024-12-15 15:58:39.599479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.086 [2024-12-15 15:58:39.599481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.344 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.344 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.345 15:58:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.345 [2024-12-15 15:58:39.775914] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1189e40/0x118e330) succeed. 00:11:11.345 [2024-12-15 15:58:39.786469] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x118b480/0x11cf9d0) succeed. 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.345 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.603 Malloc1 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.603 [2024-12-15 15:58:40.055903] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:11.603 15:58:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.603 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:11.603 { 00:11:11.603 "name": "Malloc1", 00:11:11.603 "aliases": [ 00:11:11.603 "ebd948e8-da9a-4581-ab8a-99dd5a02cb49" 00:11:11.603 ], 00:11:11.603 "product_name": "Malloc disk", 00:11:11.603 "block_size": 512, 00:11:11.603 "num_blocks": 1048576, 00:11:11.603 "uuid": "ebd948e8-da9a-4581-ab8a-99dd5a02cb49", 00:11:11.603 "assigned_rate_limits": { 00:11:11.603 "rw_ios_per_sec": 0, 00:11:11.603 "rw_mbytes_per_sec": 0, 00:11:11.603 "r_mbytes_per_sec": 0, 00:11:11.604 "w_mbytes_per_sec": 0 00:11:11.604 }, 00:11:11.604 "claimed": true, 00:11:11.604 "claim_type": "exclusive_write", 00:11:11.604 "zoned": false, 00:11:11.604 "supported_io_types": { 00:11:11.604 "read": true, 00:11:11.604 "write": true, 00:11:11.604 "unmap": true, 00:11:11.604 "flush": true, 00:11:11.604 "reset": true, 00:11:11.604 "nvme_admin": false, 00:11:11.604 "nvme_io": false, 00:11:11.604 "nvme_io_md": false, 00:11:11.604 "write_zeroes": true, 00:11:11.604 "zcopy": true, 00:11:11.604 "get_zone_info": false, 00:11:11.604 "zone_management": false, 00:11:11.604 "zone_append": false, 00:11:11.604 "compare": false, 00:11:11.604 "compare_and_write": false, 00:11:11.604 "abort": true, 00:11:11.604 "seek_hole": false, 00:11:11.604 "seek_data": false, 00:11:11.604 "copy": true, 00:11:11.604 "nvme_iov_md": false 00:11:11.604 }, 00:11:11.604 "memory_domains": [ 00:11:11.604 { 00:11:11.604 "dma_device_id": "system", 00:11:11.604 "dma_device_type": 1 00:11:11.604 }, 00:11:11.604 { 00:11:11.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.604 "dma_device_type": 2 00:11:11.604 } 00:11:11.604 ], 00:11:11.604 "driver_specific": {} 00:11:11.604 } 00:11:11.604 ]' 00:11:11.604 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:11.604 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:11.604 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:11.604 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:11.604 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:11.604 15:58:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:11.862 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:11.862 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:12.795 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.795 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:12.795 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.795 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:12.795 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:14.701 15:58:43 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:14.701 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:14.959 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.893 ************************************ 00:11:15.893 START TEST filesystem_in_capsule_ext4 00:11:15.893 ************************************ 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:15.893 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:15.893 mke2fs 1.47.0 (5-Feb-2023) 00:11:16.151 Discarding device blocks: 0/522240 done 00:11:16.151 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:16.151 Filesystem UUID: 2ef91b56-2923-47a9-b193-4e8bdee04ad0 00:11:16.151 Superblock backups stored on blocks: 00:11:16.151 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:16.151 00:11:16.151 Allocating group tables: 0/64 done 00:11:16.151 Writing inode tables: 0/64 done 00:11:16.151 Creating journal (8192 blocks): done 00:11:16.151 Writing superblocks and filesystem accounting information: 0/64 done 00:11:16.151 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2725434 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.151 00:11:16.151 real 0m0.199s 00:11:16.151 user 0m0.031s 00:11:16.151 sys 0m0.078s 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:16.151 ************************************ 00:11:16.151 END TEST filesystem_in_capsule_ext4 00:11:16.151 ************************************ 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 
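
[editor's note] make_filesystem, as traced for ext4 above (and again for btrfs and xfs below), picks -F for ext4 and -f for everything else before invoking mkfs. A simplified sketch; the retry loop is an assumption suggested by the i counter in the trace, not confirmed by it:

    make_filesystem() {
        local fstype=$1 dev_name=$2 i=0 force
        # ext4's mkfs wants -F to force, btrfs/xfs use -f
        [[ $fstype == ext4 ]] && force=-F || force=-f
        until mkfs.$fstype $force "$dev_name"; do
            (( ++i >= 3 )) && return 1   # assumed retry bound
            sleep 1
        done
    }
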
-- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.151 ************************************ 00:11:16.151 START TEST filesystem_in_capsule_btrfs 00:11:16.151 ************************************ 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:16.151 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:16.410 btrfs-progs v6.8.1 00:11:16.410 See https://btrfs.readthedocs.io for more information. 00:11:16.410 00:11:16.410 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:16.411 NOTE: several default settings have changed in version 5.15, please make sure 00:11:16.411 this does not affect your deployments: 00:11:16.411 - DUP for metadata (-m dup) 00:11:16.411 - enabled no-holes (-O no-holes) 00:11:16.411 - enabled free-space-tree (-R free-space-tree) 00:11:16.411 00:11:16.411 Label: (null) 00:11:16.411 UUID: c0c055df-f7bd-4dd7-900c-1439270c15c9 00:11:16.411 Node size: 16384 00:11:16.411 Sector size: 4096 (CPU page size: 4096) 00:11:16.411 Filesystem size: 510.00MiB 00:11:16.411 Block group profiles: 00:11:16.411 Data: single 8.00MiB 00:11:16.411 Metadata: DUP 32.00MiB 00:11:16.411 System: DUP 8.00MiB 00:11:16.411 SSD detected: yes 00:11:16.411 Zoned device: no 00:11:16.411 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:16.411 Checksum: crc32c 00:11:16.411 Number of devices: 1 00:11:16.411 Devices: 00:11:16.411 ID SIZE PATH 00:11:16.411 1 510.00MiB /dev/nvme0n1p1 00:11:16.411 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2725434 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.411 00:11:16.411 real 0m0.246s 00:11:16.411 user 0m0.034s 00:11:16.411 sys 0m0.127s 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
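
[editor's note] Each filesystem variant runs the same mount smoke test just traced for btrfs: mount the partition, create and remove a file with syncs in between, unmount, then confirm via lsblk that the namespace and partition are still visible. As a standalone sketch:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present
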
common/autotest_common.sh@10 -- # set +x 00:11:16.411 ************************************ 00:11:16.411 END TEST filesystem_in_capsule_btrfs 00:11:16.411 ************************************ 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.411 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.672 ************************************ 00:11:16.672 START TEST filesystem_in_capsule_xfs 00:11:16.672 ************************************ 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:16.672 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:16.672 = sectsz=512 attr=2, projid32bit=1 00:11:16.672 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:16.672 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:16.672 data = bsize=4096 blocks=130560, imaxpct=25 00:11:16.672 = sunit=0 swidth=0 blks 00:11:16.672 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:16.672 log =internal log bsize=4096 blocks=16384, version=2 00:11:16.672 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:16.672 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:16.672 Discarding blocks...Done. 
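
[editor's note] Sanity arithmetic on the mkfs.xfs geometry above: data blocks times block size recovers the filesystem's data size, which matches the 510 MiB partition carved out earlier.

    echo $(( 130560 * 4096 ))             # 534773760 bytes
    echo $(( 130560 * 4096 / 1048576 ))   # 510 MiB
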
00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:16.672 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:16.673 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.673 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2725434 00:11:16.673 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.673 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.673 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.673 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.673 00:11:16.673 real 0m0.209s 00:11:16.673 user 0m0.024s 00:11:16.673 sys 0m0.089s 00:11:16.673 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.673 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.673 ************************************ 00:11:16.673 END TEST filesystem_in_capsule_xfs 00:11:16.673 ************************************ 00:11:16.932 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:16.932 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:16.932 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.866 15:58:46 
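
[editor's note] waitforserial_disconnect, whose trace follows, is the inverse of the connect-side wait: it polls lsblk until the serial no longer appears. A hedged sketch; the loop bound is an assumption:

    i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        (( ++i > 15 )) && break   # assumed bound
        sleep 1
    done
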
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2725434 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2725434 ']' 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2725434 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2725434 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2725434' 00:11:17.866 killing process with pid 2725434 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2725434 00:11:17.866 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2725434 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:18.434 00:11:18.434 real 0m7.349s 
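
[editor's note] killprocess above checks that the nvmf target pid (2725434) is alive with kill -0, confirms via ps that it is the SPDK reactor rather than a sudo wrapper, then kills and reaps it. A simplified sketch (the real helper also branches on the OS and on sudo-owned processes):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] && kill "$pid"
        wait "$pid"
    }
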
00:11:18.434 user 0m28.570s 00:11:18.434 sys 0m1.222s 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.434 ************************************ 00:11:18.434 END TEST nvmf_filesystem_in_capsule 00:11:18.434 ************************************ 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:18.434 rmmod nvme_rdma 00:11:18.434 rmmod nvme_fabrics 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:11:18.434 00:11:18.434 real 0m21.781s 00:11:18.434 user 0m59.122s 00:11:18.434 sys 0m7.635s 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.434 ************************************ 00:11:18.434 END TEST nvmf_filesystem 00:11:18.434 ************************************ 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.434 ************************************ 00:11:18.434 START TEST nvmf_target_discovery 00:11:18.434 ************************************ 00:11:18.434 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:18.694 * Looking for test storage... 
00:11:18.694 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:18.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.694 --rc genhtml_branch_coverage=1 00:11:18.694 --rc genhtml_function_coverage=1 00:11:18.694 --rc genhtml_legend=1 00:11:18.694 --rc geninfo_all_blocks=1 00:11:18.694 --rc geninfo_unexecuted_blocks=1 00:11:18.694 00:11:18.694 ' 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:18.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.694 --rc genhtml_branch_coverage=1 00:11:18.694 --rc genhtml_function_coverage=1 00:11:18.694 --rc genhtml_legend=1 00:11:18.694 --rc geninfo_all_blocks=1 00:11:18.694 --rc geninfo_unexecuted_blocks=1 00:11:18.694 00:11:18.694 ' 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:18.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.694 --rc genhtml_branch_coverage=1 00:11:18.694 --rc genhtml_function_coverage=1 00:11:18.694 --rc genhtml_legend=1 00:11:18.694 --rc geninfo_all_blocks=1 00:11:18.694 --rc geninfo_unexecuted_blocks=1 00:11:18.694 00:11:18.694 ' 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:18.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.694 --rc genhtml_branch_coverage=1 00:11:18.694 --rc genhtml_function_coverage=1 00:11:18.694 --rc genhtml_legend=1 00:11:18.694 --rc geninfo_all_blocks=1 00:11:18.694 --rc geninfo_unexecuted_blocks=1 00:11:18.694 00:11:18.694 ' 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
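
[editor's note] The lt/cmp_versions trace above decides whether the installed lcov (1.15 here) predates version 2 by splitting both versions on [.-:] and comparing field by field. A sketch consistent with the traced steps:

    lt() {   # lt A B -> succeeds when version A < version B
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo 'pre-2.x lcov: enable branch/function coverage opts'
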
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.694 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.695 15:58:47 
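
[editor's note] paths/export.sh prepends its toolchain directories every time it is sourced, which is why the PATH above carries the same golangci/protoc/go triplet many times over. Harmless, but a dedup one-liner (illustrative only, not part of the harness) would be:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: '!seen[$0]++ { printf "%s%s", sep, $0; sep=":" }')
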
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.695 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.695 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.811 15:58:53 
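
[editor's note] The "[: : integer expression expected" message above is a real, benign bug at common.sh line 33: a flag variable expands empty and then hits an arithmetic test, '[' '' -eq 1 ']'. The usual guard is a default expansion; the variable name below is illustrative:

    # fails when flag is empty:   [ "$flag" -eq 1 ]
    # safe with a default:
    if [[ ${flag:-0} -eq 1 ]]; then
        echo 'flag set'
    fi
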
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.811 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:11:26.812 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:26.812 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:26.812 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:26.812 15:58:53 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:26.812 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # rdma_device_init 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@526 -- # allocate_nic_ips 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:26.812 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.812 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:26.812 altname enp217s0f0np0 00:11:26.812 altname ens818f0np0 00:11:26.812 inet 192.168.100.8/24 scope global mlx_0_0 00:11:26.812 valid_lft forever preferred_lft forever 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:26.812 15:58:53 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:26.812 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.812 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:26.812 altname enp217s0f1np1 00:11:26.812 altname ens818f1np1 00:11:26.812 inet 192.168.100.9/24 scope global mlx_0_1 00:11:26.812 valid_lft forever preferred_lft forever 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:11:26.812 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:11:26.812 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 
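The trace above derives each RDMA interface's IPv4 address by parsing "ip -o -4 addr show". A minimal standalone sketch of that helper, assuming (as on this rig for mlx_0_0/mlx_0_1) each interface carries exactly one IPv4 address:

    # Sketch of the get_ip_address helper traced above; assumes a single
    # IPv4 address per interface, as configured here.
    get_ip_address() {
        local interface=$1
        # field 4 of the one-line output is "ADDR/PREFIX"; cut strips the prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9 on this rig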
00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:11:26.813 192.168.100.9' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:11:26.813 192.168.100.9' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # head -n 1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:11:26.813 192.168.100.9' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # tail -n +2 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # head -n 1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:26.813 15:58:54 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=2730377 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 2730377 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2730377 ']' 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 [2024-12-15 15:58:54.161040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:26.813 [2024-12-15 15:58:54.161090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.813 [2024-12-15 15:58:54.233322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.813 [2024-12-15 15:58:54.273666] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.813 [2024-12-15 15:58:54.273721] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.813 [2024-12-15 15:58:54.273730] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.813 [2024-12-15 15:58:54.273738] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.813 [2024-12-15 15:58:54.273761] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
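nvmfappstart above launches build/bin/nvmf_tgt and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A hedged sketch of that start-and-wait pattern ($SPDK_DIR and the rpc_get_methods polling loop are assumptions of this sketch; the harness's waitforlisten in autotest_common.sh is more elaborate):

    # Launch the target with the same flags as the trace above, then poll
    # the UNIX-domain RPC socket until it accepts requests.
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # target not listening yet; retry
    done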
00:11:26.813 [2024-12-15 15:58:54.273816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.813 [2024-12-15 15:58:54.273902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.813 [2024-12-15 15:58:54.273986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.813 [2024-12-15 15:58:54.273988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 [2024-12-15 15:58:54.450486] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe50e40/0xe55330) succeed. 00:11:26.813 [2024-12-15 15:58:54.461407] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe52480/0xe969d0) succeed. 
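nvmf_create_transport has just set up the rdma transport, and the rpc_cmd loop that follows creates four null bdevs, each exported as its own subsystem listening on 192.168.100.8:4420, then adds a discovery listener and a referral to port 4430. The same sequence as a standalone sketch (the $rpc shorthand and scripts/rpc.py path are assumptions; sizes, NQNs, and addresses are taken from the trace):

    # $rpc relies on word splitting; fine for a sketch against one socket.
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512    # same size/block args as the trace
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430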
00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 Null1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 [2024-12-15 15:58:54.625309] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.813 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.813 Null2 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:26.814 15:58:54 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 Null3 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 Null4 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:11:26.814 00:11:26.814 Discovery Log Number of Records 6, Generation counter 6 00:11:26.814 =====Discovery Log Entry 0====== 00:11:26.814 trtype: rdma 00:11:26.814 adrfam: ipv4 00:11:26.814 subtype: current discovery subsystem 00:11:26.814 treq: not required 00:11:26.814 portid: 0 00:11:26.814 trsvcid: 4420 00:11:26.814 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.814 traddr: 192.168.100.8 00:11:26.814 eflags: explicit discovery connections, duplicate discovery information 00:11:26.814 rdma_prtype: not specified 00:11:26.814 rdma_qptype: connected 00:11:26.814 rdma_cms: rdma-cm 00:11:26.814 rdma_pkey: 0x0000 00:11:26.814 =====Discovery Log Entry 1====== 00:11:26.814 trtype: rdma 00:11:26.814 adrfam: ipv4 00:11:26.814 subtype: nvme subsystem 00:11:26.814 treq: not required 00:11:26.814 portid: 0 00:11:26.814 trsvcid: 4420 00:11:26.814 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.814 traddr: 192.168.100.8 00:11:26.814 eflags: none 00:11:26.814 rdma_prtype: not specified 00:11:26.814 rdma_qptype: connected 00:11:26.814 rdma_cms: rdma-cm 00:11:26.814 rdma_pkey: 0x0000 00:11:26.814 =====Discovery Log Entry 2====== 00:11:26.814 trtype: rdma 00:11:26.814 adrfam: ipv4 00:11:26.814 subtype: nvme subsystem 00:11:26.814 treq: not required 00:11:26.814 portid: 0 00:11:26.814 trsvcid: 4420 00:11:26.814 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:26.814 traddr: 192.168.100.8 00:11:26.814 eflags: none 00:11:26.814 rdma_prtype: not specified 00:11:26.814 rdma_qptype: connected 00:11:26.814 rdma_cms: rdma-cm 00:11:26.814 rdma_pkey: 0x0000 00:11:26.814 =====Discovery Log Entry 3====== 00:11:26.814 trtype: rdma 00:11:26.814 adrfam: ipv4 00:11:26.814 subtype: nvme subsystem 00:11:26.814 treq: not required 00:11:26.814 portid: 0 00:11:26.814 trsvcid: 4420 00:11:26.814 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:26.814 traddr: 192.168.100.8 00:11:26.814 eflags: none 00:11:26.814 rdma_prtype: not specified 00:11:26.814 rdma_qptype: connected 00:11:26.814 rdma_cms: rdma-cm 00:11:26.814 rdma_pkey: 0x0000 00:11:26.814 =====Discovery Log Entry 4====== 00:11:26.814 trtype: rdma 00:11:26.814 adrfam: ipv4 00:11:26.814 subtype: nvme subsystem 00:11:26.814 treq: not required 00:11:26.814 portid: 0 00:11:26.814 trsvcid: 4420 00:11:26.814 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:26.814 traddr: 192.168.100.8 00:11:26.814 eflags: none 00:11:26.814 rdma_prtype: not specified 00:11:26.814 rdma_qptype: connected 00:11:26.814 rdma_cms: rdma-cm 00:11:26.814 rdma_pkey: 0x0000 00:11:26.814 =====Discovery Log Entry 5====== 00:11:26.814 trtype: rdma 00:11:26.814 adrfam: ipv4 00:11:26.814 subtype: discovery subsystem referral 00:11:26.814 treq: not required 00:11:26.814 portid: 0 00:11:26.814 trsvcid: 4430 00:11:26.814 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.814 traddr: 192.168.100.8 00:11:26.814 eflags: none 00:11:26.814 rdma_prtype: unrecognized 00:11:26.814 rdma_qptype: unrecognized 00:11:26.814 rdma_cms: unrecognized 00:11:26.814 rdma_pkey: 0x0000 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:26.814 Perform nvmf subsystem discovery via RPC 00:11:26.814 15:58:54 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.814 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.814 [ 00:11:26.814 { 00:11:26.814 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:26.814 "subtype": "Discovery", 00:11:26.814 "listen_addresses": [ 00:11:26.814 { 00:11:26.814 "trtype": "RDMA", 00:11:26.814 "adrfam": "IPv4", 00:11:26.814 "traddr": "192.168.100.8", 00:11:26.814 "trsvcid": "4420" 00:11:26.814 } 00:11:26.814 ], 00:11:26.814 "allow_any_host": true, 00:11:26.814 "hosts": [] 00:11:26.814 }, 00:11:26.814 { 00:11:26.814 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.814 "subtype": "NVMe", 00:11:26.814 "listen_addresses": [ 00:11:26.814 { 00:11:26.814 "trtype": "RDMA", 00:11:26.814 "adrfam": "IPv4", 00:11:26.814 "traddr": "192.168.100.8", 00:11:26.814 "trsvcid": "4420" 00:11:26.815 } 00:11:26.815 ], 00:11:26.815 "allow_any_host": true, 00:11:26.815 "hosts": [], 00:11:26.815 "serial_number": "SPDK00000000000001", 00:11:26.815 "model_number": "SPDK bdev Controller", 00:11:26.815 "max_namespaces": 32, 00:11:26.815 "min_cntlid": 1, 00:11:26.815 "max_cntlid": 65519, 00:11:26.815 "namespaces": [ 00:11:26.815 { 00:11:26.815 "nsid": 1, 00:11:26.815 "bdev_name": "Null1", 00:11:26.815 "name": "Null1", 00:11:26.815 "nguid": "CB51CF0F0CEE4CB3933B9562EC1FD2CB", 00:11:26.815 "uuid": "cb51cf0f-0cee-4cb3-933b-9562ec1fd2cb" 00:11:26.815 } 00:11:26.815 ] 00:11:26.815 }, 00:11:26.815 { 00:11:26.815 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:26.815 "subtype": "NVMe", 00:11:26.815 "listen_addresses": [ 00:11:26.815 { 00:11:26.815 "trtype": "RDMA", 00:11:26.815 "adrfam": "IPv4", 00:11:26.815 "traddr": "192.168.100.8", 00:11:26.815 "trsvcid": "4420" 00:11:26.815 } 00:11:26.815 ], 00:11:26.815 "allow_any_host": true, 00:11:26.815 "hosts": [], 00:11:26.815 "serial_number": "SPDK00000000000002", 00:11:26.815 "model_number": "SPDK bdev Controller", 00:11:26.815 "max_namespaces": 32, 00:11:26.815 "min_cntlid": 1, 00:11:26.815 "max_cntlid": 65519, 00:11:26.815 "namespaces": [ 00:11:26.815 { 00:11:26.815 "nsid": 1, 00:11:26.815 "bdev_name": "Null2", 00:11:26.815 "name": "Null2", 00:11:26.815 "nguid": "7462C0E03C3D455796D80E83C941B945", 00:11:26.815 "uuid": "7462c0e0-3c3d-4557-96d8-0e83c941b945" 00:11:26.815 } 00:11:26.815 ] 00:11:26.815 }, 00:11:26.815 { 00:11:26.815 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:26.815 "subtype": "NVMe", 00:11:26.815 "listen_addresses": [ 00:11:26.815 { 00:11:26.815 "trtype": "RDMA", 00:11:26.815 "adrfam": "IPv4", 00:11:26.815 "traddr": "192.168.100.8", 00:11:26.815 "trsvcid": "4420" 00:11:26.815 } 00:11:26.815 ], 00:11:26.815 "allow_any_host": true, 00:11:26.815 "hosts": [], 00:11:26.815 "serial_number": "SPDK00000000000003", 00:11:26.815 "model_number": "SPDK bdev Controller", 00:11:26.815 "max_namespaces": 32, 00:11:26.815 "min_cntlid": 1, 00:11:26.815 "max_cntlid": 65519, 00:11:26.815 "namespaces": [ 00:11:26.815 { 00:11:26.815 "nsid": 1, 00:11:26.815 "bdev_name": "Null3", 00:11:26.815 "name": "Null3", 00:11:26.815 "nguid": "22CE25D37BFE4862A24289E925C01E9D", 00:11:26.815 "uuid": "22ce25d3-7bfe-4862-a242-89e925c01e9d" 00:11:26.815 } 00:11:26.815 ] 00:11:26.815 }, 00:11:26.815 { 00:11:26.815 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:26.815 "subtype": "NVMe", 00:11:26.815 "listen_addresses": [ 00:11:26.815 { 00:11:26.815 
"trtype": "RDMA", 00:11:26.815 "adrfam": "IPv4", 00:11:26.815 "traddr": "192.168.100.8", 00:11:26.815 "trsvcid": "4420" 00:11:26.815 } 00:11:26.815 ], 00:11:26.815 "allow_any_host": true, 00:11:26.815 "hosts": [], 00:11:26.815 "serial_number": "SPDK00000000000004", 00:11:26.815 "model_number": "SPDK bdev Controller", 00:11:26.815 "max_namespaces": 32, 00:11:26.815 "min_cntlid": 1, 00:11:26.815 "max_cntlid": 65519, 00:11:26.815 "namespaces": [ 00:11:26.815 { 00:11:26.815 "nsid": 1, 00:11:26.815 "bdev_name": "Null4", 00:11:26.815 "name": "Null4", 00:11:26.815 "nguid": "276471161E994696B708548A6D9D9849", 00:11:26.815 "uuid": "27647116-1e99-4696-b708-548a6d9d9849" 00:11:26.815 } 00:11:26.815 ] 00:11:26.815 } 00:11:26.815 ] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:26.815 
15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:26.815 15:58:55 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:26.815 rmmod nvme_rdma 00:11:26.815 rmmod nvme_fabrics 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 2730377 ']' 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 2730377 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2730377 ']' 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2730377 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2730377 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.815 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.816 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2730377' 00:11:26.816 killing process with pid 2730377 00:11:26.816 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2730377 00:11:26.816 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2730377 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:11:27.074 00:11:27.074 real 0m8.428s 00:11:27.074 user 0m6.416s 00:11:27.074 sys 0m5.755s 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.074 ************************************ 00:11:27.074 END TEST nvmf_target_discovery 
00:11:27.074 ************************************ 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.074 ************************************ 00:11:27.074 START TEST nvmf_referrals 00:11:27.074 ************************************ 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:27.074 * Looking for test storage... 00:11:27.074 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.074 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.333 --rc genhtml_branch_coverage=1 00:11:27.333 --rc genhtml_function_coverage=1 00:11:27.333 --rc genhtml_legend=1 00:11:27.333 --rc geninfo_all_blocks=1 00:11:27.333 --rc geninfo_unexecuted_blocks=1 00:11:27.333 00:11:27.333 ' 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.333 --rc genhtml_branch_coverage=1 00:11:27.333 --rc genhtml_function_coverage=1 00:11:27.333 --rc genhtml_legend=1 00:11:27.333 --rc geninfo_all_blocks=1 00:11:27.333 --rc geninfo_unexecuted_blocks=1 00:11:27.333 00:11:27.333 ' 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.333 --rc genhtml_branch_coverage=1 00:11:27.333 --rc genhtml_function_coverage=1 00:11:27.333 --rc genhtml_legend=1 00:11:27.333 --rc geninfo_all_blocks=1 00:11:27.333 --rc geninfo_unexecuted_blocks=1 00:11:27.333 00:11:27.333 ' 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.333 --rc genhtml_branch_coverage=1 00:11:27.333 --rc genhtml_function_coverage=1 00:11:27.333 --rc genhtml_legend=1 00:11:27.333 --rc geninfo_all_blocks=1 00:11:27.333 --rc geninfo_unexecuted_blocks=1 00:11:27.333 00:11:27.333 ' 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.333 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.334 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.334 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:33.897 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:33.897 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:33.898 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:33.898 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:33.898 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 
00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # rdma_device_init 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:33.898 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@526 -- # allocate_nic_ips 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@109 -- # continue 2 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:34.157 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:34.157 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:34.157 altname enp217s0f0np0 00:11:34.157 altname ens818f0np0 00:11:34.157 inet 192.168.100.8/24 scope global mlx_0_0 00:11:34.157 valid_lft forever preferred_lft forever 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:34.157 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:34.157 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:34.157 altname enp217s0f1np1 00:11:34.157 altname ens818f1np1 00:11:34.157 inet 192.168.100.9/24 scope global mlx_0_1 00:11:34.157 valid_lft forever preferred_lft forever 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:34.157 15:59:02 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.157 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:11:34.158 192.168.100.9' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:11:34.158 
192.168.100.9' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # head -n 1 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:11:34.158 192.168.100.9' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # head -n 1 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # tail -n +2 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=2733851 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 2733851 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2733851 ']' 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.158 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.416 [2024-12-15 15:59:02.733259] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
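The nvmfappstart/waitforlisten pair traced above boils down to launching build/bin/nvmf_tgt in the background and polling its RPC socket until it answers; the "Waiting for process to start up..." line is that poll loop. A rough sketch, assuming a built SPDK tree (the retry budget mirrors the log's max_retries=100, but the loop itself is illustrative, not the harness code):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods only succeeds once the target listens on /var/tmp/spdk.sock
      "$spdk/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done
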
00:11:34.416 [2024-12-15 15:59:02.733310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.416 [2024-12-15 15:59:02.807960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.416 [2024-12-15 15:59:02.847991] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.416 [2024-12-15 15:59:02.848027] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.416 [2024-12-15 15:59:02.848037] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.416 [2024-12-15 15:59:02.848046] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.416 [2024-12-15 15:59:02.848054] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.416 [2024-12-15 15:59:02.848102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.416 [2024-12-15 15:59:02.848123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.416 [2024-12-15 15:59:02.848219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.416 [2024-12-15 15:59:02.848220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.416 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.416 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:34.416 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:34.416 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.416 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.675 [2024-12-15 15:59:03.033489] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a59e40/0x1a5e330) succeed. 00:11:34.675 [2024-12-15 15:59:03.043895] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a5b480/0x1a9f9d0) succeed. 
00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.675 [2024-12-15 15:59:03.167772] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:34.675 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:34.676 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:34.934 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.935 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.935 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:34.935 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.935 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:35.193 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.452 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.452 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:35.711 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:35.969 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
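The recurring host-side check in this test (get_referral_ips nvme / get_discovery_entries, traced again just above) is a single pipeline: pull the discovery log page from the target as JSON with nvme-cli and filter it with jq. Condensed, using the hostnqn/hostid generated for this run:

  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e \
      -t rdma -a 192.168.100.8 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # every referral has been removed by this point, so the pipeline prints nothing,
  # which is what the [[ '' == '' ]] assertion below verifies
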
00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.970 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:35.970 rmmod nvme_rdma 00:11:35.970 rmmod nvme_fabrics 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 2733851 ']' 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 2733851 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2733851 ']' 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2733851 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2733851 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2733851' 00:11:36.228 killing process with pid 2733851 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2733851 00:11:36.228 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2733851 00:11:36.488 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:36.488 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:11:36.488 00:11:36.488 real 0m9.407s 00:11:36.488 user 0m10.734s 00:11:36.488 sys 0m6.209s 00:11:36.488 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.488 15:59:04 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.488 ************************************ 00:11:36.488 END TEST nvmf_referrals 00:11:36.488 ************************************ 00:11:36.488 15:59:04 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:36.488 15:59:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:36.488 15:59:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.488 15:59:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.488 ************************************ 00:11:36.488 START TEST nvmf_connect_disconnect 00:11:36.488 ************************************ 00:11:36.488 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:36.746 * Looking for test storage... 00:11:36.746 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:36.746 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:36.746 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:36.746 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:36.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.747 --rc genhtml_branch_coverage=1 00:11:36.747 --rc genhtml_function_coverage=1 00:11:36.747 --rc genhtml_legend=1 00:11:36.747 --rc geninfo_all_blocks=1 00:11:36.747 --rc geninfo_unexecuted_blocks=1 00:11:36.747 00:11:36.747 ' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:36.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.747 --rc genhtml_branch_coverage=1 00:11:36.747 --rc genhtml_function_coverage=1 00:11:36.747 --rc genhtml_legend=1 00:11:36.747 --rc geninfo_all_blocks=1 00:11:36.747 --rc geninfo_unexecuted_blocks=1 00:11:36.747 00:11:36.747 ' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:36.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.747 --rc genhtml_branch_coverage=1 00:11:36.747 --rc genhtml_function_coverage=1 00:11:36.747 --rc genhtml_legend=1 00:11:36.747 --rc geninfo_all_blocks=1 00:11:36.747 --rc geninfo_unexecuted_blocks=1 00:11:36.747 00:11:36.747 ' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:36.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.747 --rc genhtml_branch_coverage=1 00:11:36.747 --rc genhtml_function_coverage=1 00:11:36.747 --rc genhtml_legend=1 00:11:36.747 --rc geninfo_all_blocks=1 00:11:36.747 --rc geninfo_unexecuted_blocks=1 00:11:36.747 00:11:36.747 ' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.747 15:59:05 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.747 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.747 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:43.305 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ 
mlx5_core == unknown ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:43.305 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:43.305 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:43.305 15:59:11 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:43.305 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # rdma_device_init 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@526 -- # allocate_nic_ips 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.305 15:59:11 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.305 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.306 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:43.611 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.611 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:43.611 altname enp217s0f0np0 00:11:43.611 altname ens818f0np0 00:11:43.611 inet 192.168.100.8/24 scope global mlx_0_0 00:11:43.611 valid_lft forever preferred_lft forever 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.611 
15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:43.611 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.611 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:43.611 altname enp217s0f1np1 00:11:43.611 altname ens818f1np1 00:11:43.611 inet 192.168.100.9/24 scope global mlx_0_1 00:11:43.611 valid_lft forever preferred_lft forever 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:43.611 15:59:11 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:11:43.611 192.168.100.9' 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:11:43.611 192.168.100.9' 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # head -n 1 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:11:43.611 192.168.100.9' 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # tail -n +2 00:11:43.611 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # head -n 1 00:11:43.612 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:43.612 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:11:43.612 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:43.612 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:11:43.612 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:11:43.612 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:43.612 
15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=2737641 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 2737641 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2737641 ']' 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.612 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.612 [2024-12-15 15:59:12.076350] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:43.612 [2024-12-15 15:59:12.076407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.899 [2024-12-15 15:59:12.148892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.899 [2024-12-15 15:59:12.189472] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.899 [2024-12-15 15:59:12.189512] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.899 [2024-12-15 15:59:12.189521] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.899 [2024-12-15 15:59:12.189530] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.899 [2024-12-15 15:59:12.189538] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
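
The block above is nvmfappstart: the target binary is launched with -i 0 -e 0xFFFF -m 0xF, its PID is captured as nvmfpid, and waitforlisten (local max_retries=100 per the trace) blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait step, assuming SPDK's rpc.py and its rpc_get_methods RPC; the real waitforlisten helper in autotest_common.sh is more defensive than this loop:

    # Sketch only - simplified stand-in for the nvmfappstart/waitforlisten step traced above.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    retries=100
    # Poll the RPC socket until the target responds (or give up).
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        (( retries-- > 0 )) || { echo "nvmf_tgt never came up" >&2; exit 1; }
        sleep 0.5
    done
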
00:11:43.899 [2024-12-15 15:59:12.189583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.899 [2024-12-15 15:59:12.189667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.899 [2024-12-15 15:59:12.189759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.899 [2024-12-15 15:59:12.189762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.899 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.899 [2024-12-15 15:59:12.353770] rdma.c:2737:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:43.899 [2024-12-15 15:59:12.376876] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1af0e40/0x1af5330) succeed. 00:11:43.899 [2024-12-15 15:59:12.387681] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1af2480/0x1b369d0) succeed. 
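
With the RDMA transport created and both mlx5 devices registered (the create_ib_device notices above), the test provisions its subsystem over the same RPC socket: a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, the namespace, and an RDMA listener on 192.168.100.8:4420, all visible in the rpc_cmd trace that follows. The same bring-up, sketched as direct rpc.py calls instead of the test's rpc_cmd wrapper (flags copied from the trace):

    # Sketch: the equivalent bring-up issued straight from the shell.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                    # prints the new bdev name, Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
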
00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.163 [2024-12-15 15:59:12.528078] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:44.163 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:47.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.120 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:06.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.531 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:59.146 16:04:27 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:59.146 rmmod nvme_rdma 00:16:59.146 rmmod nvme_fabrics 00:16:59.146 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.405 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 2737641 ']' 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 2737641 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2737641 ']' 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2737641 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2737641 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2737641' 00:16:59.406 killing process with pid 2737641 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2737641 00:16:59.406 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2737641 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:16:59.665 00:16:59.665 real 5m23.067s 00:16:59.665 user 21m0.172s 00:16:59.665 sys 0m17.980s 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:59.665 
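
The long run of "disconnected 1 controller(s)" records above is the loop body itself: num_iterations=100 and NVME_CONNECT='nvme connect -i 8' were set just before set +x silenced the per-iteration trace, so only nvme-cli's own output survives. One iteration, reconstructed from values visible earlier in the log (hostnqn/hostid come from the nvme gen-hostnqn step; whatever checks the script runs between the two calls are hidden by the set +x):

    # Sketch of a single pass of the 100-iteration connect/disconnect loop.
    nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # emits the "disconnected 1 controller(s)" line
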
************************************ 00:16:59.665 END TEST nvmf_connect_disconnect 00:16:59.665 ************************************ 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.665 ************************************ 00:16:59.665 START TEST nvmf_multitarget 00:16:59.665 ************************************ 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:16:59.665 * Looking for test storage... 00:16:59.665 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:59.665 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.925 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:59.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.926 --rc genhtml_branch_coverage=1 00:16:59.926 --rc genhtml_function_coverage=1 00:16:59.926 --rc genhtml_legend=1 00:16:59.926 --rc geninfo_all_blocks=1 00:16:59.926 --rc geninfo_unexecuted_blocks=1 00:16:59.926 00:16:59.926 ' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:59.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.926 --rc genhtml_branch_coverage=1 00:16:59.926 --rc genhtml_function_coverage=1 00:16:59.926 --rc genhtml_legend=1 00:16:59.926 --rc geninfo_all_blocks=1 00:16:59.926 --rc geninfo_unexecuted_blocks=1 00:16:59.926 00:16:59.926 ' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:59.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.926 --rc genhtml_branch_coverage=1 00:16:59.926 --rc genhtml_function_coverage=1 00:16:59.926 --rc genhtml_legend=1 00:16:59.926 --rc geninfo_all_blocks=1 00:16:59.926 --rc geninfo_unexecuted_blocks=1 00:16:59.926 00:16:59.926 ' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:59.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.926 --rc genhtml_branch_coverage=1 00:16:59.926 --rc genhtml_function_coverage=1 00:16:59.926 --rc genhtml_legend=1 00:16:59.926 --rc geninfo_all_blocks=1 00:16:59.926 --rc geninfo_unexecuted_blocks=1 00:16:59.926 00:16:59.926 ' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.926 16:04:28 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.926 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:59.926 16:04:28 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:59.926 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:06.502 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:06.502 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 
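The "Found 0000:d9:00.0 (0x15b3 - 0x1015)" lines above come from matching each PCI function's vendor:device pair against the e810/x722/mlx ID tables built just before, then resolving the function to its kernel net device through sysfs. A minimal standalone sketch of that lookup, assuming only the standard Linux sysfs layout (the pci_to_netdev helper name is illustrative, not part of nvmf/common.sh):

#!/usr/bin/env bash
# Sketch: map Mellanox PCI functions to their net devices -- the same
# sysfs walk the trace performs with
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
mellanox=0x15b3
pci_to_netdev() {                                  # hypothetical helper
    local pci=$1
    # Each network-class PCI function exposes its netdev(s) under net/.
    local devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${devs[0]} ]] && echo "${devs[@]##*/}"   # e.g. mlx_0_0
}
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    if [[ $vendor == "$mellanox" ]]; then
        pci=${dev##*/}
        echo "Found $pci ($vendor - $device): $(pci_to_netdev "$pci")"
    fi
done

On a node like the one traced here, this should print the two ConnectX-4 Lx (0x1015) ports at 0000:d9:00.0 and 0000:d9:00.1 together with mlx_0_0 and mlx_0_1.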
00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:06.502 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:06.502 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # rdma_device_init 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@526 -- # allocate_nic_ips 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.502 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:06.503 16:04:34 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:06.503 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:06.503 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:06.503 altname enp217s0f0np0 00:17:06.503 altname ens818f0np0 00:17:06.503 inet 192.168.100.8/24 scope global mlx_0_0 00:17:06.503 valid_lft forever preferred_lft forever 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:06.503 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:06.503 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:06.503 altname enp217s0f1np1 00:17:06.503 altname ens818f1np1 00:17:06.503 inet 192.168.100.9/24 scope global mlx_0_1 00:17:06.503 valid_lft forever preferred_lft forever 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:17:06.503 192.168.100.9' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:17:06.503 192.168.100.9' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # head -n 1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # head -n 1 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:17:06.503 192.168.100.9' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # tail -n +2 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=2797052 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 2797052 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2797052 ']' 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:06.503 [2024-12-15 16:04:34.506410] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:06.503 [2024-12-15 16:04:34.506464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.503 [2024-12-15 16:04:34.576773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.503 [2024-12-15 16:04:34.615315] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:06.503 [2024-12-15 16:04:34.615356] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.503 [2024-12-15 16:04:34.615366] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.503 [2024-12-15 16:04:34.615374] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.503 [2024-12-15 16:04:34.615380] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.503 [2024-12-15 16:04:34.615476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.503 [2024-12-15 16:04:34.615589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.503 [2024-12-15 16:04:34.615676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.503 [2024-12-15 16:04:34.615677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:06.503 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.504 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:06.504 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:06.504 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:06.504 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:06.504 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:06.504 "nvmf_tgt_1" 00:17:06.504 16:04:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:06.762 "nvmf_tgt_2" 00:17:06.762 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:06.762 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:06.762 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:06.762 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:06.762 true 00:17:06.762 16:04:35 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:07.021 true 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:07.021 rmmod nvme_rdma 00:17:07.021 rmmod nvme_fabrics 00:17:07.021 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 2797052 ']' 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 2797052 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2797052 ']' 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2797052 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2797052 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2797052' 00:17:07.281 killing process with pid 2797052 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2797052 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 
2797052 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:17:07.281 00:17:07.281 real 0m7.726s 00:17:07.281 user 0m7.303s 00:17:07.281 sys 0m5.201s 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.281 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:07.281 ************************************ 00:17:07.281 END TEST nvmf_multitarget 00:17:07.281 ************************************ 00:17:07.541 16:04:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:07.541 16:04:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:07.541 16:04:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.541 16:04:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.541 ************************************ 00:17:07.541 START TEST nvmf_rpc 00:17:07.541 ************************************ 00:17:07.541 16:04:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:07.541 * Looking for test storage... 00:17:07.541 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.541 
16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:07.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.541 --rc genhtml_branch_coverage=1 00:17:07.541 --rc genhtml_function_coverage=1 00:17:07.541 --rc genhtml_legend=1 00:17:07.541 --rc geninfo_all_blocks=1 00:17:07.541 --rc geninfo_unexecuted_blocks=1 00:17:07.541 00:17:07.541 ' 00:17:07.541 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:07.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.541 --rc genhtml_branch_coverage=1 00:17:07.541 --rc genhtml_function_coverage=1 00:17:07.541 --rc genhtml_legend=1 00:17:07.541 --rc geninfo_all_blocks=1 00:17:07.541 --rc geninfo_unexecuted_blocks=1 00:17:07.541 00:17:07.541 ' 00:17:07.542 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:07.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.542 --rc genhtml_branch_coverage=1 00:17:07.542 --rc genhtml_function_coverage=1 00:17:07.542 --rc genhtml_legend=1 00:17:07.542 --rc geninfo_all_blocks=1 00:17:07.542 --rc geninfo_unexecuted_blocks=1 00:17:07.542 00:17:07.542 ' 00:17:07.542 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:07.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.542 --rc genhtml_branch_coverage=1 00:17:07.542 --rc genhtml_function_coverage=1 00:17:07.542 --rc genhtml_legend=1 00:17:07.542 --rc geninfo_all_blocks=1 00:17:07.542 --rc geninfo_unexecuted_blocks=1 00:17:07.542 00:17:07.542 ' 00:17:07.542 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.542 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname 
-s 00:17:07.801 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.802 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:17:07.802 16:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:07.802 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.376 16:04:42 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:14.376 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:14.376 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:17:14.376 16:04:42 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:14.376 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:14.376 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # rdma_device_init 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@526 -- # allocate_nic_ips 00:17:14.376 16:04:42 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:14.376 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:14.376 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:14.376 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:14.376 altname enp217s0f0np0 00:17:14.376 altname ens818f0np0 00:17:14.377 inet 192.168.100.8/24 scope global mlx_0_0 00:17:14.377 valid_lft forever preferred_lft forever 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:14.377 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:14.377 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:14.377 altname enp217s0f1np1 00:17:14.377 altname ens818f1np1 00:17:14.377 inet 192.168.100.9/24 scope global mlx_0_1 00:17:14.377 valid_lft forever preferred_lft forever 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:17:14.377 192.168.100.9' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:17:14.377 192.168.100.9' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # head -n 1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:17:14.377 192.168.100.9' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # tail -n +2 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # head -n 1 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # 
nvmfpid=2800513 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 2800513 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2800513 ']' 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.377 [2024-12-15 16:04:42.293428] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:14.377 [2024-12-15 16:04:42.293476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.377 [2024-12-15 16:04:42.362470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.377 [2024-12-15 16:04:42.402449] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.377 [2024-12-15 16:04:42.402489] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.377 [2024-12-15 16:04:42.402499] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.377 [2024-12-15 16:04:42.402507] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.377 [2024-12-15 16:04:42.402514] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
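By this point allocate_nic_ips has confirmed 192.168.100.8/24 on mlx_0_0 and 192.168.100.9/24 on mlx_0_1 (both links still report state DOWN in the ip output), each address having been extracted with the pipeline shown in the trace: ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1. NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are then peeled off RDMA_IP_LIST with head -n 1 and tail -n +2 | head -n 1, NVMF_TRANSPORT_OPTS becomes '-t rdma --num-shared-buffers 1024', nvme-rdma is modprobed, and nvmfappstart launches the target on a 4-core mask while waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A condensed sketch of that start-and-wait pattern, assuming an SPDK checkout as the working directory (the real waitforlisten probes the socket differently; polling rpc.py is a stand-in):

  # Start nvmf_tgt on 4 cores and poll its RPC socket until it is ready.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done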
00:17:14.377 [2024-12-15 16:04:42.402560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.377 [2024-12-15 16:04:42.402658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.377 [2024-12-15 16:04:42.402742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.377 [2024-12-15 16:04:42.402744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.377 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:14.377 "tick_rate": 2500000000, 00:17:14.377 "poll_groups": [ 00:17:14.377 { 00:17:14.377 "name": "nvmf_tgt_poll_group_000", 00:17:14.377 "admin_qpairs": 0, 00:17:14.377 "io_qpairs": 0, 00:17:14.377 "current_admin_qpairs": 0, 00:17:14.377 "current_io_qpairs": 0, 00:17:14.377 "pending_bdev_io": 0, 00:17:14.377 "completed_nvme_io": 0, 00:17:14.377 "transports": [] 00:17:14.377 }, 00:17:14.377 { 00:17:14.377 "name": "nvmf_tgt_poll_group_001", 00:17:14.377 "admin_qpairs": 0, 00:17:14.377 "io_qpairs": 0, 00:17:14.377 "current_admin_qpairs": 0, 00:17:14.377 "current_io_qpairs": 0, 00:17:14.377 "pending_bdev_io": 0, 00:17:14.377 "completed_nvme_io": 0, 00:17:14.377 "transports": [] 00:17:14.377 }, 00:17:14.377 { 00:17:14.377 "name": "nvmf_tgt_poll_group_002", 00:17:14.377 "admin_qpairs": 0, 00:17:14.377 "io_qpairs": 0, 00:17:14.377 "current_admin_qpairs": 0, 00:17:14.377 "current_io_qpairs": 0, 00:17:14.377 "pending_bdev_io": 0, 00:17:14.377 "completed_nvme_io": 0, 00:17:14.377 "transports": [] 00:17:14.377 }, 00:17:14.377 { 00:17:14.377 "name": "nvmf_tgt_poll_group_003", 00:17:14.377 "admin_qpairs": 0, 00:17:14.377 "io_qpairs": 0, 00:17:14.377 "current_admin_qpairs": 0, 00:17:14.377 "current_io_qpairs": 0, 00:17:14.377 "pending_bdev_io": 0, 00:17:14.377 "completed_nvme_io": 0, 00:17:14.377 "transports": [] 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 }' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.378 [2024-12-15 16:04:42.704945] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x804ec0/0x8093b0) succeed. 00:17:14.378 [2024-12-15 16:04:42.715555] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x806500/0x84aa50) succeed. 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:14.378 "tick_rate": 2500000000, 00:17:14.378 "poll_groups": [ 00:17:14.378 { 00:17:14.378 "name": "nvmf_tgt_poll_group_000", 00:17:14.378 "admin_qpairs": 0, 00:17:14.378 "io_qpairs": 0, 00:17:14.378 "current_admin_qpairs": 0, 00:17:14.378 "current_io_qpairs": 0, 00:17:14.378 "pending_bdev_io": 0, 00:17:14.378 "completed_nvme_io": 0, 00:17:14.378 "transports": [ 00:17:14.378 { 00:17:14.378 "trtype": "RDMA", 00:17:14.378 "pending_data_buffer": 0, 00:17:14.378 "devices": [ 00:17:14.378 { 00:17:14.378 "name": "mlx5_0", 00:17:14.378 "polls": 16091, 00:17:14.378 "idle_polls": 16091, 00:17:14.378 "completions": 0, 00:17:14.378 "requests": 0, 00:17:14.378 "request_latency": 0, 00:17:14.378 "pending_free_request": 0, 00:17:14.378 "pending_rdma_read": 0, 00:17:14.378 "pending_rdma_write": 0, 00:17:14.378 "pending_rdma_send": 0, 00:17:14.378 "total_send_wrs": 0, 00:17:14.378 "send_doorbell_updates": 0, 00:17:14.378 "total_recv_wrs": 4096, 00:17:14.378 "recv_doorbell_updates": 1 00:17:14.378 }, 00:17:14.378 { 00:17:14.378 "name": "mlx5_1", 00:17:14.378 "polls": 16091, 00:17:14.378 "idle_polls": 16091, 00:17:14.378 "completions": 0, 00:17:14.378 "requests": 0, 00:17:14.378 "request_latency": 0, 00:17:14.378 "pending_free_request": 0, 00:17:14.378 "pending_rdma_read": 0, 00:17:14.378 "pending_rdma_write": 0, 00:17:14.378 "pending_rdma_send": 0, 00:17:14.378 "total_send_wrs": 0, 00:17:14.378 "send_doorbell_updates": 0, 00:17:14.378 "total_recv_wrs": 4096, 00:17:14.378 "recv_doorbell_updates": 1 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 }, 00:17:14.378 { 00:17:14.378 "name": "nvmf_tgt_poll_group_001", 00:17:14.378 "admin_qpairs": 0, 00:17:14.378 "io_qpairs": 0, 00:17:14.378 "current_admin_qpairs": 0, 00:17:14.378 "current_io_qpairs": 0, 00:17:14.378 "pending_bdev_io": 0, 00:17:14.378 "completed_nvme_io": 0, 00:17:14.378 "transports": [ 00:17:14.378 { 00:17:14.378 "trtype": "RDMA", 00:17:14.378 "pending_data_buffer": 0, 00:17:14.378 "devices": [ 00:17:14.378 { 00:17:14.378 "name": "mlx5_0", 
00:17:14.378 "polls": 10096, 00:17:14.378 "idle_polls": 10096, 00:17:14.378 "completions": 0, 00:17:14.378 "requests": 0, 00:17:14.378 "request_latency": 0, 00:17:14.378 "pending_free_request": 0, 00:17:14.378 "pending_rdma_read": 0, 00:17:14.378 "pending_rdma_write": 0, 00:17:14.378 "pending_rdma_send": 0, 00:17:14.378 "total_send_wrs": 0, 00:17:14.378 "send_doorbell_updates": 0, 00:17:14.378 "total_recv_wrs": 4096, 00:17:14.378 "recv_doorbell_updates": 1 00:17:14.378 }, 00:17:14.378 { 00:17:14.378 "name": "mlx5_1", 00:17:14.378 "polls": 10096, 00:17:14.378 "idle_polls": 10096, 00:17:14.378 "completions": 0, 00:17:14.378 "requests": 0, 00:17:14.378 "request_latency": 0, 00:17:14.378 "pending_free_request": 0, 00:17:14.378 "pending_rdma_read": 0, 00:17:14.378 "pending_rdma_write": 0, 00:17:14.378 "pending_rdma_send": 0, 00:17:14.378 "total_send_wrs": 0, 00:17:14.378 "send_doorbell_updates": 0, 00:17:14.378 "total_recv_wrs": 4096, 00:17:14.378 "recv_doorbell_updates": 1 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 }, 00:17:14.378 { 00:17:14.378 "name": "nvmf_tgt_poll_group_002", 00:17:14.378 "admin_qpairs": 0, 00:17:14.378 "io_qpairs": 0, 00:17:14.378 "current_admin_qpairs": 0, 00:17:14.378 "current_io_qpairs": 0, 00:17:14.378 "pending_bdev_io": 0, 00:17:14.378 "completed_nvme_io": 0, 00:17:14.378 "transports": [ 00:17:14.378 { 00:17:14.378 "trtype": "RDMA", 00:17:14.378 "pending_data_buffer": 0, 00:17:14.378 "devices": [ 00:17:14.378 { 00:17:14.378 "name": "mlx5_0", 00:17:14.378 "polls": 5629, 00:17:14.378 "idle_polls": 5629, 00:17:14.378 "completions": 0, 00:17:14.378 "requests": 0, 00:17:14.378 "request_latency": 0, 00:17:14.378 "pending_free_request": 0, 00:17:14.378 "pending_rdma_read": 0, 00:17:14.378 "pending_rdma_write": 0, 00:17:14.378 "pending_rdma_send": 0, 00:17:14.378 "total_send_wrs": 0, 00:17:14.378 "send_doorbell_updates": 0, 00:17:14.378 "total_recv_wrs": 4096, 00:17:14.378 "recv_doorbell_updates": 1 00:17:14.378 }, 00:17:14.378 { 00:17:14.378 "name": "mlx5_1", 00:17:14.378 "polls": 5629, 00:17:14.378 "idle_polls": 5629, 00:17:14.378 "completions": 0, 00:17:14.378 "requests": 0, 00:17:14.378 "request_latency": 0, 00:17:14.378 "pending_free_request": 0, 00:17:14.378 "pending_rdma_read": 0, 00:17:14.378 "pending_rdma_write": 0, 00:17:14.378 "pending_rdma_send": 0, 00:17:14.378 "total_send_wrs": 0, 00:17:14.378 "send_doorbell_updates": 0, 00:17:14.378 "total_recv_wrs": 4096, 00:17:14.378 "recv_doorbell_updates": 1 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 }, 00:17:14.378 { 00:17:14.378 "name": "nvmf_tgt_poll_group_003", 00:17:14.378 "admin_qpairs": 0, 00:17:14.378 "io_qpairs": 0, 00:17:14.378 "current_admin_qpairs": 0, 00:17:14.378 "current_io_qpairs": 0, 00:17:14.378 "pending_bdev_io": 0, 00:17:14.378 "completed_nvme_io": 0, 00:17:14.378 "transports": [ 00:17:14.378 { 00:17:14.378 "trtype": "RDMA", 00:17:14.378 "pending_data_buffer": 0, 00:17:14.378 "devices": [ 00:17:14.378 { 00:17:14.378 "name": "mlx5_0", 00:17:14.378 "polls": 932, 00:17:14.378 "idle_polls": 932, 00:17:14.378 "completions": 0, 00:17:14.378 "requests": 0, 00:17:14.378 "request_latency": 0, 00:17:14.378 "pending_free_request": 0, 00:17:14.378 "pending_rdma_read": 0, 00:17:14.378 "pending_rdma_write": 0, 00:17:14.378 "pending_rdma_send": 0, 00:17:14.378 "total_send_wrs": 0, 00:17:14.378 "send_doorbell_updates": 0, 00:17:14.378 "total_recv_wrs": 4096, 00:17:14.378 "recv_doorbell_updates": 1 00:17:14.378 }, 00:17:14.378 { 00:17:14.378 "name": "mlx5_1", 
00:17:14.378 "polls": 932, 00:17:14.378 "idle_polls": 932, 00:17:14.378 "completions": 0, 00:17:14.378 "requests": 0, 00:17:14.378 "request_latency": 0, 00:17:14.378 "pending_free_request": 0, 00:17:14.378 "pending_rdma_read": 0, 00:17:14.378 "pending_rdma_write": 0, 00:17:14.378 "pending_rdma_send": 0, 00:17:14.378 "total_send_wrs": 0, 00:17:14.378 "send_doorbell_updates": 0, 00:17:14.378 "total_recv_wrs": 4096, 00:17:14.378 "recv_doorbell_updates": 1 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 } 00:17:14.378 ] 00:17:14.378 }' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:14.378 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:14.639 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:14.639 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:17:14.639 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:17:14.639 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:17:14.639 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:17:14.639 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:14.639 16:04:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:14.639 16:04:43 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 Malloc1 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.639 [2024-12-15 16:04:43.143026] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:14.639 16:04:43 
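Together with the transport created earlier, the lines above provision the target over JSON-RPC: a 64 MB malloc bdev (MALLOC_BDEV_SIZE=64) with 512-byte blocks; subsystem nqn.2016-06.io.spdk:cnode1 created with -a (allow any host) and serial SPDKISFASTANDAWESOME; Malloc1 attached as a namespace; allow_any_host immediately disabled again with -d so the negative connect test can run; and an RDMA listener on 192.168.100.8:4420. Outside the harness's rpc_cmd wrapper, the same sequence is, to a close approximation:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420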
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:14.639 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:17:14.639 [2024-12-15 16:04:43.193318] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:17:14.899 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:14.899 could not add new controller: failed to write to nvme-fabrics device 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.899 16:04:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:15.837 16:04:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.837 16:04:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.837 16:04:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.837 16:04:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:15.837 16:04:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:17.743 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:17.743 16:04:46 
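This is the access-control check: with allow_any_host disabled and no hosts whitelisted, the initiator's write to /dev/nvme-fabrics fails ("Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host ..."), and the NOT wrapper converts that expected failure (es=1) into a pass. Only after nvmf_subsystem_add_host whitelists the host NQN does the same connect succeed. At its core NOT is an exit-status inverter; a minimal sketch, noting that the real autotest_common.sh helper also validates the command and classifies signal exits (es > 128), which this omits:

  # Succeed only if the wrapped command fails.
  NOT() { ! "$@"; }

  NOT nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 \
      -a 192.168.100.8 -s 4420 -i 15 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e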
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:17.743 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.743 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:17.743 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.743 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:17.743 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:18.679 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:18.939 [2024-12-15 16:04:47.284821] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:17:18.939 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:18.939 could not add new controller: failed to write to nvme-fabrics device 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.939 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:19.877 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:19.877 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:19.877 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.877 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:19.877 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:21.785 16:04:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:21.785 16:04:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:21.785 16:04:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.785 16:04:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:21.785 16:04:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.785 16:04:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:21.785 16:04:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.723 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:22.723 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:22.723 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:22.723 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.982 [2024-12-15 16:04:51.345293] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.982 16:04:51 
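The remainder of the test is a five-iteration churn loop (for i in $(seq 1 5)): each pass recreates cnode1 with the same serial, re-adds the RDMA listener and the Malloc1 namespace (this time with an explicit NSID, -n 5), enables allow_any_host, connects, verifies the serial shows up in lsblk, disconnects, removes namespace 5, and deletes the subsystem. Schematically, using the same RPCs the trace shows:

  for i in $(seq 1 5); do
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 -i 15
      # ... verify the namespace appeared, then tear down:
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done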
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.982 16:04:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:23.921 16:04:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:23.921 16:04:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:23.921 16:04:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.921 16:04:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:23.921 16:04:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:25.827 16:04:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:25.827 16:04:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:25.827 16:04:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.827 16:04:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:25.827 16:04:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.827 16:04:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:25.827 16:04:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.764 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.764 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:26.764 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:26.764 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.023 [2024-12-15 16:04:55.386502] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.023 16:04:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:27.960 16:04:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:27.960 16:04:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:27.960 16:04:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:27.960 16:04:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:27.960 16:04:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:29.998 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:29.998 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:29.998 
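waitforserial is the readiness probe used after every connect: it polls lsblk for a block device whose serial matches the subsystem's (SPDKISFASTANDAWESOME), giving up after 15 tries. Reassembled, with some simplification, from the trace (the real helper also takes an expected device count, which defaults to 1):

  # Poll until a namespace with the expected serial appears in lsblk.
  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
      done
      return 1
  }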
16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:29.998 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:29.998 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:29.998 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:29.998 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.936 [2024-12-15 16:04:59.420131] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.936 16:04:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:31.873 16:05:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:31.873 16:05:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:31.873 16:05:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:31.873 16:05:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:31.873 16:05:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:34.410 16:05:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:34.410 16:05:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:34.410 16:05:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.410 16:05:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:34.410 16:05:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.410 16:05:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:34.410 16:05:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:34.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:34.979 16:05:03 
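The teardown mirror image: nvme disconnect -n <nqn> reports how many controllers it tore down, and waitforserial_disconnect polls until no lsblk row carries the serial any more (the trace shows it checking both the default and the -l list formats). A simplified sketch of the disconnect-side wait, under the same assumptions as waitforserial above:

  # Poll until the serial has vanished from lsblk (device fully gone).
  waitforserial_disconnect() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
          sleep 2
      done
      return 1
  }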
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.979 [2024-12-15 16:05:03.478537] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.979 16:05:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:35.918 16:05:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:35.918 16:05:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:35.918 16:05:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:35.918 16:05:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:35.918 16:05:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:38.453 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:38.453 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:38.453 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:38.453 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:38.453 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.453 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:38.453 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:39.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:39.021 16:05:07 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.021 [2024-12-15 16:05:07.522517] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.021 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.022 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.022 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:39.022 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.022 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.022 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.022 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:39.959 16:05:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:39.959 16:05:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:39.959 16:05:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.959 16:05:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:39.959 16:05:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:42.497 16:05:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:42.497 16:05:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:42.497 16:05:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.497 16:05:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:42.497 16:05:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:17:42.497 16:05:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:42.497 16:05:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 [2024-12-15 16:05:11.568584] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 [2024-12-15 16:05:11.616854] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.064 16:05:11 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.064 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.324 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.324 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.324 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.324 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.324 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 [2024-12-15 16:05:11.665043] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 [2024-12-15 16:05:11.713175] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
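The iterations above and below come from the second loop in target/rpc.sh (the @99-@107 lines in the trace), which cycles a subsystem through its full lifecycle without ever connecting a host: create the subsystem, add an RDMA listener, attach Malloc1 as a namespace, open it to any host, then remove the namespace and delete the subsystem. A minimal standalone sketch of that per-iteration sequence, assuming a running nvmf target and the scripts/rpc.py client from this workspace (the loop count here is illustrative; rpc.sh computes its own):

#!/usr/bin/env bash
# Sketch of the rpc.sh @99-@107 create/teardown loop seen in the surrounding trace.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
loops=5   # illustrative value

for i in $(seq 1 "$loops"); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1          # nsid auto-assigned; first is 1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
done

Unlike the @81-@94 loop earlier in the trace, this variant never runs nvme connect; it exercises only the RPC surface.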
00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 [2024-12-15 16:05:11.761372] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.325 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:43.325 "tick_rate": 2500000000, 00:17:43.325 "poll_groups": [ 00:17:43.325 { 00:17:43.325 "name": "nvmf_tgt_poll_group_000", 00:17:43.325 "admin_qpairs": 2, 00:17:43.325 "io_qpairs": 27, 00:17:43.325 "current_admin_qpairs": 0, 00:17:43.325 "current_io_qpairs": 0, 00:17:43.325 "pending_bdev_io": 0, 00:17:43.325 "completed_nvme_io": 180, 00:17:43.325 "transports": [ 00:17:43.325 { 00:17:43.325 "trtype": "RDMA", 00:17:43.325 "pending_data_buffer": 0, 00:17:43.325 "devices": [ 00:17:43.325 { 00:17:43.325 "name": "mlx5_0", 00:17:43.325 "polls": 3621107, 00:17:43.325 "idle_polls": 3620698, 00:17:43.325 "completions": 471, 00:17:43.325 "requests": 235, 00:17:43.325 "request_latency": 51051560, 00:17:43.325 "pending_free_request": 0, 00:17:43.325 "pending_rdma_read": 0, 00:17:43.325 "pending_rdma_write": 0, 00:17:43.325 "pending_rdma_send": 0, 00:17:43.325 "total_send_wrs": 414, 00:17:43.325 "send_doorbell_updates": 202, 00:17:43.325 "total_recv_wrs": 4331, 00:17:43.325 "recv_doorbell_updates": 202 00:17:43.325 }, 00:17:43.325 { 00:17:43.325 "name": "mlx5_1", 00:17:43.325 "polls": 3621107, 00:17:43.325 "idle_polls": 3621107, 00:17:43.325 "completions": 0, 00:17:43.325 "requests": 0, 00:17:43.325 "request_latency": 0, 00:17:43.325 "pending_free_request": 0, 00:17:43.325 "pending_rdma_read": 0, 00:17:43.325 "pending_rdma_write": 0, 00:17:43.325 "pending_rdma_send": 0, 00:17:43.325 "total_send_wrs": 0, 00:17:43.325 "send_doorbell_updates": 0, 00:17:43.325 "total_recv_wrs": 4096, 00:17:43.325 "recv_doorbell_updates": 1 00:17:43.325 } 00:17:43.325 ] 00:17:43.325 } 00:17:43.325 ] 00:17:43.325 }, 00:17:43.325 { 00:17:43.325 "name": "nvmf_tgt_poll_group_001", 00:17:43.325 "admin_qpairs": 2, 00:17:43.325 "io_qpairs": 26, 00:17:43.325 "current_admin_qpairs": 0, 00:17:43.325 "current_io_qpairs": 0, 00:17:43.325 "pending_bdev_io": 0, 00:17:43.325 "completed_nvme_io": 121, 00:17:43.326 "transports": [ 00:17:43.326 { 00:17:43.326 "trtype": "RDMA", 00:17:43.326 "pending_data_buffer": 0, 00:17:43.326 "devices": [ 00:17:43.326 { 00:17:43.326 "name": "mlx5_0", 00:17:43.326 "polls": 3579703, 00:17:43.326 "idle_polls": 3579392, 00:17:43.326 "completions": 346, 00:17:43.326 "requests": 173, 00:17:43.326 "request_latency": 33940534, 00:17:43.326 "pending_free_request": 0, 00:17:43.326 "pending_rdma_read": 0, 00:17:43.326 "pending_rdma_write": 0, 00:17:43.326 "pending_rdma_send": 0, 00:17:43.326 "total_send_wrs": 292, 00:17:43.326 "send_doorbell_updates": 150, 00:17:43.326 "total_recv_wrs": 4269, 00:17:43.326 "recv_doorbell_updates": 151 00:17:43.326 }, 00:17:43.326 { 00:17:43.326 "name": "mlx5_1", 00:17:43.326 "polls": 3579703, 00:17:43.326 "idle_polls": 3579703, 00:17:43.326 "completions": 0, 00:17:43.326 "requests": 0, 00:17:43.326 "request_latency": 0, 00:17:43.326 "pending_free_request": 0, 00:17:43.326 
"pending_rdma_read": 0, 00:17:43.326 "pending_rdma_write": 0, 00:17:43.326 "pending_rdma_send": 0, 00:17:43.326 "total_send_wrs": 0, 00:17:43.326 "send_doorbell_updates": 0, 00:17:43.326 "total_recv_wrs": 4096, 00:17:43.326 "recv_doorbell_updates": 1 00:17:43.326 } 00:17:43.326 ] 00:17:43.326 } 00:17:43.326 ] 00:17:43.326 }, 00:17:43.326 { 00:17:43.326 "name": "nvmf_tgt_poll_group_002", 00:17:43.326 "admin_qpairs": 1, 00:17:43.326 "io_qpairs": 26, 00:17:43.326 "current_admin_qpairs": 0, 00:17:43.326 "current_io_qpairs": 0, 00:17:43.326 "pending_bdev_io": 0, 00:17:43.326 "completed_nvme_io": 77, 00:17:43.326 "transports": [ 00:17:43.326 { 00:17:43.326 "trtype": "RDMA", 00:17:43.326 "pending_data_buffer": 0, 00:17:43.326 "devices": [ 00:17:43.326 { 00:17:43.326 "name": "mlx5_0", 00:17:43.326 "polls": 3694991, 00:17:43.326 "idle_polls": 3694800, 00:17:43.326 "completions": 211, 00:17:43.326 "requests": 105, 00:17:43.326 "request_latency": 19750682, 00:17:43.326 "pending_free_request": 0, 00:17:43.326 "pending_rdma_read": 0, 00:17:43.326 "pending_rdma_write": 0, 00:17:43.326 "pending_rdma_send": 0, 00:17:43.326 "total_send_wrs": 170, 00:17:43.326 "send_doorbell_updates": 95, 00:17:43.326 "total_recv_wrs": 4201, 00:17:43.326 "recv_doorbell_updates": 95 00:17:43.326 }, 00:17:43.326 { 00:17:43.326 "name": "mlx5_1", 00:17:43.326 "polls": 3694991, 00:17:43.326 "idle_polls": 3694991, 00:17:43.326 "completions": 0, 00:17:43.326 "requests": 0, 00:17:43.326 "request_latency": 0, 00:17:43.326 "pending_free_request": 0, 00:17:43.326 "pending_rdma_read": 0, 00:17:43.326 "pending_rdma_write": 0, 00:17:43.326 "pending_rdma_send": 0, 00:17:43.326 "total_send_wrs": 0, 00:17:43.326 "send_doorbell_updates": 0, 00:17:43.326 "total_recv_wrs": 4096, 00:17:43.326 "recv_doorbell_updates": 1 00:17:43.326 } 00:17:43.326 ] 00:17:43.326 } 00:17:43.326 ] 00:17:43.326 }, 00:17:43.326 { 00:17:43.326 "name": "nvmf_tgt_poll_group_003", 00:17:43.326 "admin_qpairs": 2, 00:17:43.326 "io_qpairs": 26, 00:17:43.326 "current_admin_qpairs": 0, 00:17:43.326 "current_io_qpairs": 0, 00:17:43.326 "pending_bdev_io": 0, 00:17:43.326 "completed_nvme_io": 77, 00:17:43.326 "transports": [ 00:17:43.326 { 00:17:43.326 "trtype": "RDMA", 00:17:43.326 "pending_data_buffer": 0, 00:17:43.326 "devices": [ 00:17:43.326 { 00:17:43.326 "name": "mlx5_0", 00:17:43.326 "polls": 2925214, 00:17:43.326 "idle_polls": 2924975, 00:17:43.326 "completions": 260, 00:17:43.326 "requests": 130, 00:17:43.326 "request_latency": 22767570, 00:17:43.326 "pending_free_request": 0, 00:17:43.326 "pending_rdma_read": 0, 00:17:43.326 "pending_rdma_write": 0, 00:17:43.326 "pending_rdma_send": 0, 00:17:43.326 "total_send_wrs": 206, 00:17:43.326 "send_doorbell_updates": 118, 00:17:43.326 "total_recv_wrs": 4226, 00:17:43.326 "recv_doorbell_updates": 119 00:17:43.326 }, 00:17:43.326 { 00:17:43.326 "name": "mlx5_1", 00:17:43.326 "polls": 2925214, 00:17:43.326 "idle_polls": 2925214, 00:17:43.326 "completions": 0, 00:17:43.326 "requests": 0, 00:17:43.326 "request_latency": 0, 00:17:43.326 "pending_free_request": 0, 00:17:43.326 "pending_rdma_read": 0, 00:17:43.326 "pending_rdma_write": 0, 00:17:43.326 "pending_rdma_send": 0, 00:17:43.326 "total_send_wrs": 0, 00:17:43.326 "send_doorbell_updates": 0, 00:17:43.326 "total_recv_wrs": 4096, 00:17:43.326 "recv_doorbell_updates": 1 00:17:43.326 } 00:17:43.326 ] 00:17:43.326 } 00:17:43.326 ] 00:17:43.326 } 00:17:43.326 ] 00:17:43.326 }' 00:17:43.326 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:17:43.326 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:43.326 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:43.326 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:43.586 16:05:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 127510346 > 0 )) 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:43.586 rmmod nvme_rdma 00:17:43.586 rmmod nvme_fabrics 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.586 
16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 2800513 ']' 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 2800513 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2800513 ']' 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2800513 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2800513 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2800513' 00:17:43.586 killing process with pid 2800513 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2800513 00:17:43.586 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2800513 00:17:44.155 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:44.155 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:17:44.155 00:17:44.155 real 0m36.491s 00:17:44.155 user 2m1.415s 00:17:44.155 sys 0m6.387s 00:17:44.155 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.155 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.155 ************************************ 00:17:44.155 END TEST nvmf_rpc 00:17:44.155 ************************************ 00:17:44.155 16:05:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.156 ************************************ 00:17:44.156 START TEST nvmf_invalid 00:17:44.156 ************************************ 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:44.156 * Looking for test storage... 
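A note on the stats checks that closed out nvmf_rpc above: the (( 7 > 0 )), (( 105 > 0 )), (( 1288 > 0 )), and (( 127510346 > 0 )) assertions are driven by a small jsum helper (target/rpc.sh @19-@20 in the trace) that applies a jq filter to the nvmf_get_stats JSON and sums the matching values with awk. A sketch of that helper as it appears from the trace, assuming rpc_cmd forwards to scripts/rpc.py:

# jsum: sum one numeric field across all poll groups in nvmf_get_stats output.
jsum() {
    local filter=$1
    rpc_cmd nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

# Usage, mirroring the assertions in the trace:
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))
(( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))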
00:17:44.156 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.156 --rc genhtml_branch_coverage=1 00:17:44.156 --rc genhtml_function_coverage=1 00:17:44.156 --rc genhtml_legend=1 00:17:44.156 --rc geninfo_all_blocks=1 00:17:44.156 --rc geninfo_unexecuted_blocks=1 00:17:44.156 00:17:44.156 ' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.156 --rc genhtml_branch_coverage=1 00:17:44.156 --rc genhtml_function_coverage=1 00:17:44.156 --rc genhtml_legend=1 00:17:44.156 --rc geninfo_all_blocks=1 00:17:44.156 --rc geninfo_unexecuted_blocks=1 00:17:44.156 00:17:44.156 ' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.156 --rc genhtml_branch_coverage=1 00:17:44.156 --rc genhtml_function_coverage=1 00:17:44.156 --rc genhtml_legend=1 00:17:44.156 --rc geninfo_all_blocks=1 00:17:44.156 --rc geninfo_unexecuted_blocks=1 00:17:44.156 00:17:44.156 ' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.156 --rc genhtml_branch_coverage=1 00:17:44.156 --rc genhtml_function_coverage=1 00:17:44.156 --rc genhtml_legend=1 00:17:44.156 --rc geninfo_all_blocks=1 00:17:44.156 --rc geninfo_unexecuted_blocks=1 00:17:44.156 00:17:44.156 ' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:44.156 
16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.156 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.156 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:44.157 16:05:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:50.739 16:05:19 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:50.739 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:50.739 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:50.739 16:05:19 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.739 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:50.740 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:50.740 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # rdma_device_init 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@62 -- # uname 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@526 -- # allocate_nic_ips 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:50.740 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:50.740 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:50.740 altname enp217s0f0np0 00:17:50.740 altname ens818f0np0 00:17:50.740 inet 192.168.100.8/24 scope global mlx_0_0 00:17:50.740 valid_lft forever preferred_lft forever 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:50.740 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:50.740 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:50.740 altname enp217s0f1np1 00:17:50.740 altname ens818f1np1 00:17:50.740 inet 192.168.100.9/24 scope global mlx_0_1 00:17:50.740 valid_lft forever preferred_lft forever 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:50.740 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.000 16:05:19 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:17:51.000 192.168.100.9' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:17:51.000 192.168.100.9' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # head -n 1 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # tail -n +2 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # head -n 1 00:17:51.000 
16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:17:51.000 192.168.100.9' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:17:51.000 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=2809138 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 2809138 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2809138 ']' 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.001 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.001 [2024-12-15 16:05:19.441137] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:51.001 [2024-12-15 16:05:19.441186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.001 [2024-12-15 16:05:19.511134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.001 [2024-12-15 16:05:19.551031] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.001 [2024-12-15 16:05:19.551073] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:51.001 [2024-12-15 16:05:19.551083] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.001 [2024-12-15 16:05:19.551092] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.001 [2024-12-15 16:05:19.551099] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.001 [2024-12-15 16:05:19.551147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.001 [2024-12-15 16:05:19.551244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.001 [2024-12-15 16:05:19.551326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.001 [2024-12-15 16:05:19.551328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.260 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.260 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:51.260 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:51.260 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.260 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.260 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.260 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:51.260 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12729 00:17:51.520 [2024-12-15 16:05:19.867105] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:51.520 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:51.520 { 00:17:51.520 "nqn": "nqn.2016-06.io.spdk:cnode12729", 00:17:51.520 "tgt_name": "foobar", 00:17:51.520 "method": "nvmf_create_subsystem", 00:17:51.520 "req_id": 1 00:17:51.520 } 00:17:51.520 Got JSON-RPC error response 00:17:51.520 response: 00:17:51.520 { 00:17:51.520 "code": -32603, 00:17:51.520 "message": "Unable to find target foobar" 00:17:51.520 }' 00:17:51.520 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:51.520 { 00:17:51.520 "nqn": "nqn.2016-06.io.spdk:cnode12729", 00:17:51.520 "tgt_name": "foobar", 00:17:51.520 "method": "nvmf_create_subsystem", 00:17:51.520 "req_id": 1 00:17:51.520 } 00:17:51.520 Got JSON-RPC error response 00:17:51.520 response: 00:17:51.520 { 00:17:51.520 "code": -32603, 00:17:51.520 "message": "Unable to find target foobar" 00:17:51.520 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:51.520 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:51.520 16:05:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24978 00:17:51.520 [2024-12-15 16:05:20.071823] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode24978: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:51.780 { 00:17:51.780 "nqn": "nqn.2016-06.io.spdk:cnode24978", 00:17:51.780 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:51.780 "method": "nvmf_create_subsystem", 00:17:51.780 "req_id": 1 00:17:51.780 } 00:17:51.780 Got JSON-RPC error response 00:17:51.780 response: 00:17:51.780 { 00:17:51.780 "code": -32602, 00:17:51.780 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:51.780 }' 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:51.780 { 00:17:51.780 "nqn": "nqn.2016-06.io.spdk:cnode24978", 00:17:51.780 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:51.780 "method": "nvmf_create_subsystem", 00:17:51.780 "req_id": 1 00:17:51.780 } 00:17:51.780 Got JSON-RPC error response 00:17:51.780 response: 00:17:51.780 { 00:17:51.780 "code": -32602, 00:17:51.780 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:51.780 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16248 00:17:51.780 [2024-12-15 16:05:20.280508] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16248: invalid model number 'SPDK_Controller' 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:51.780 { 00:17:51.780 "nqn": "nqn.2016-06.io.spdk:cnode16248", 00:17:51.780 "model_number": "SPDK_Controller\u001f", 00:17:51.780 "method": "nvmf_create_subsystem", 00:17:51.780 "req_id": 1 00:17:51.780 } 00:17:51.780 Got JSON-RPC error response 00:17:51.780 response: 00:17:51.780 { 00:17:51.780 "code": -32602, 00:17:51.780 "message": "Invalid MN SPDK_Controller\u001f" 00:17:51.780 }' 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:51.780 { 00:17:51.780 "nqn": "nqn.2016-06.io.spdk:cnode16248", 00:17:51.780 "model_number": "SPDK_Controller\u001f", 00:17:51.780 "method": "nvmf_create_subsystem", 00:17:51.780 "req_id": 1 00:17:51.780 } 00:17:51.780 Got JSON-RPC error response 00:17:51.780 response: 00:17:51.780 { 00:17:51.780 "code": -32602, 00:17:51.780 "message": "Invalid MN SPDK_Controller\u001f" 00:17:51.780 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:51.780 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@21 -- # local chars 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:51.781 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.040 16:05:20 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:52.040 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:52.041 16:05:20 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'O=I9+p:1t~9MT|GP7dX?i' 00:17:52.041 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'O=I9+p:1t~9MT|GP7dX?i' nqn.2016-06.io.spdk:cnode9339 00:17:52.301 [2024-12-15 16:05:20.657778] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9339: invalid serial number 'O=I9+p:1t~9MT|GP7dX?i' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:52.301 { 00:17:52.301 "nqn": "nqn.2016-06.io.spdk:cnode9339", 00:17:52.301 "serial_number": "O=I9+p:1t~9MT|GP7dX?i", 00:17:52.301 "method": "nvmf_create_subsystem", 00:17:52.301 "req_id": 1 00:17:52.301 } 00:17:52.301 Got JSON-RPC error response 00:17:52.301 response: 00:17:52.301 { 00:17:52.301 "code": -32602, 00:17:52.301 "message": "Invalid SN O=I9+p:1t~9MT|GP7dX?i" 00:17:52.301 }' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:52.301 { 00:17:52.301 "nqn": "nqn.2016-06.io.spdk:cnode9339", 00:17:52.301 "serial_number": "O=I9+p:1t~9MT|GP7dX?i", 00:17:52.301 "method": "nvmf_create_subsystem", 00:17:52.301 "req_id": 1 00:17:52.301 } 00:17:52.301 Got JSON-RPC error response 00:17:52.301 response: 00:17:52.301 { 00:17:52.301 "code": -32602, 00:17:52.301 "message": "Invalid SN O=I9+p:1t~9MT|GP7dX?i" 00:17:52.301 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:52.301 16:05:20 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.301 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x6e' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:52.302 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.562 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:52.563 16:05:20 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]] 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'yAc2!s9*J%(nF*]AB-7\%}V1,/]t|m9`@](!14:%f' 00:17:52.563 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'yAc2!s9*J%(nF*]AB-7\%}V1,/]t|m9`@](!14:%f' nqn.2016-06.io.spdk:cnode28375 00:17:52.823 [2024-12-15 16:05:21.191533] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28375: invalid model number 'yAc2!s9*J%(nF*]AB-7\%}V1,/]t|m9`@](!14:%f' 
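The loop traced above assembles the 41-character model number one random printable ASCII character at a time: printf %x picks a hex code point, echo -e decodes it, and string+= appends it, after which the finished string is handed to nvmf_create_subsystem as a model number and is expected to be rejected, as the rpc_nvmf_create_subsystem *ERROR* line confirms. A minimal sketch of that generation pattern, assuming target/invalid.sh may differ in helper name and exact character bounds:

    # Hedged sketch of the string builder seen in the trace; gen_random_string
    # is a hypothetical name, not necessarily the script's own helper.
    gen_random_string() {
        local length=$1 string='' ll
        for (( ll = 0; ll < length; ll++ )); do
            # Random printable ASCII code point (33..126), hex-encoded then decoded.
            string+=$(echo -e "\x$(printf '%x' $(( RANDOM % 94 + 33 )))")
        done
        echo "$string"
    }
    gen_random_string 41    # e.g. yAc2!s9*J%(nF*]AB-7\%}V1,/]t|m9`@](!14:%f

The [[ y == \- ]] test in the trace appears to guard against a generated string that starts with a dash, which echo would otherwise parse as an option.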
00:17:52.823 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:52.823 { 00:17:52.823 "nqn": "nqn.2016-06.io.spdk:cnode28375", 00:17:52.823 "model_number": "yAc2!s9*J%(nF*]AB-7\\%}V1,/]t|m9`@](!14:%f", 00:17:52.823 "method": "nvmf_create_subsystem", 00:17:52.823 "req_id": 1 00:17:52.823 } 00:17:52.823 Got JSON-RPC error response 00:17:52.823 response: 00:17:52.823 { 00:17:52.823 "code": -32602, 00:17:52.823 "message": "Invalid MN yAc2!s9*J%(nF*]AB-7\\%}V1,/]t|m9`@](!14:%f" 00:17:52.823 }' 00:17:52.823 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:52.823 { 00:17:52.823 "nqn": "nqn.2016-06.io.spdk:cnode28375", 00:17:52.823 "model_number": "yAc2!s9*J%(nF*]AB-7\\%}V1,/]t|m9`@](!14:%f", 00:17:52.823 "method": "nvmf_create_subsystem", 00:17:52.823 "req_id": 1 00:17:52.823 } 00:17:52.823 Got JSON-RPC error response 00:17:52.823 response: 00:17:52.823 { 00:17:52.823 "code": -32602, 00:17:52.823 "message": "Invalid MN yAc2!s9*J%(nF*]AB-7\\%}V1,/]t|m9`@](!14:%f" 00:17:52.823 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:52.823 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:17:53.082 [2024-12-15 16:05:21.411409] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f857f0/0x1f89ce0) succeed. 00:17:53.082 [2024-12-15 16:05:21.421716] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f86e30/0x1fcb380) succeed. 00:17:53.082 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:53.341 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:17:53.341 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:17:53.341 192.168.100.9' 00:17:53.341 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:53.341 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:17:53.341 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:17:53.601 [2024-12-15 16:05:21.946864] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:53.601 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:53.601 { 00:17:53.601 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:53.601 "listen_address": { 00:17:53.601 "trtype": "rdma", 00:17:53.601 "traddr": "192.168.100.8", 00:17:53.601 "trsvcid": "4421" 00:17:53.601 }, 00:17:53.601 "method": "nvmf_subsystem_remove_listener", 00:17:53.601 "req_id": 1 00:17:53.601 } 00:17:53.601 Got JSON-RPC error response 00:17:53.601 response: 00:17:53.601 { 00:17:53.601 "code": -32602, 00:17:53.601 "message": "Invalid parameters" 00:17:53.601 }' 00:17:53.601 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:53.601 { 00:17:53.601 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:53.601 "listen_address": { 00:17:53.601 "trtype": "rdma", 00:17:53.601 "traddr": "192.168.100.8", 00:17:53.601 "trsvcid": "4421" 00:17:53.601 }, 00:17:53.601 "method": 
"nvmf_subsystem_remove_listener", 00:17:53.601 "req_id": 1 00:17:53.601 } 00:17:53.601 Got JSON-RPC error response 00:17:53.601 response: 00:17:53.601 { 00:17:53.601 "code": -32602, 00:17:53.601 "message": "Invalid parameters" 00:17:53.601 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:53.601 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12700 -i 0 00:17:53.601 [2024-12-15 16:05:22.155556] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12700: invalid cntlid range [0-65519] 00:17:53.860 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:53.860 { 00:17:53.860 "nqn": "nqn.2016-06.io.spdk:cnode12700", 00:17:53.860 "min_cntlid": 0, 00:17:53.860 "method": "nvmf_create_subsystem", 00:17:53.860 "req_id": 1 00:17:53.860 } 00:17:53.860 Got JSON-RPC error response 00:17:53.860 response: 00:17:53.860 { 00:17:53.860 "code": -32602, 00:17:53.860 "message": "Invalid cntlid range [0-65519]" 00:17:53.860 }' 00:17:53.860 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:53.860 { 00:17:53.860 "nqn": "nqn.2016-06.io.spdk:cnode12700", 00:17:53.860 "min_cntlid": 0, 00:17:53.860 "method": "nvmf_create_subsystem", 00:17:53.860 "req_id": 1 00:17:53.860 } 00:17:53.860 Got JSON-RPC error response 00:17:53.860 response: 00:17:53.860 { 00:17:53.860 "code": -32602, 00:17:53.860 "message": "Invalid cntlid range [0-65519]" 00:17:53.860 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:53.860 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10580 -i 65520 00:17:53.860 [2024-12-15 16:05:22.364301] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10580: invalid cntlid range [65520-65519] 00:17:53.861 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:53.861 { 00:17:53.861 "nqn": "nqn.2016-06.io.spdk:cnode10580", 00:17:53.861 "min_cntlid": 65520, 00:17:53.861 "method": "nvmf_create_subsystem", 00:17:53.861 "req_id": 1 00:17:53.861 } 00:17:53.861 Got JSON-RPC error response 00:17:53.861 response: 00:17:53.861 { 00:17:53.861 "code": -32602, 00:17:53.861 "message": "Invalid cntlid range [65520-65519]" 00:17:53.861 }' 00:17:53.861 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:53.861 { 00:17:53.861 "nqn": "nqn.2016-06.io.spdk:cnode10580", 00:17:53.861 "min_cntlid": 65520, 00:17:53.861 "method": "nvmf_create_subsystem", 00:17:53.861 "req_id": 1 00:17:53.861 } 00:17:53.861 Got JSON-RPC error response 00:17:53.861 response: 00:17:53.861 { 00:17:53.861 "code": -32602, 00:17:53.861 "message": "Invalid cntlid range [65520-65519]" 00:17:53.861 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:53.861 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31302 -I 0 00:17:54.120 [2024-12-15 16:05:22.573030] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31302: invalid cntlid range [1-0] 00:17:54.120 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 
00:17:54.120 { 00:17:54.120 "nqn": "nqn.2016-06.io.spdk:cnode31302", 00:17:54.120 "max_cntlid": 0, 00:17:54.120 "method": "nvmf_create_subsystem", 00:17:54.120 "req_id": 1 00:17:54.120 } 00:17:54.120 Got JSON-RPC error response 00:17:54.120 response: 00:17:54.120 { 00:17:54.120 "code": -32602, 00:17:54.120 "message": "Invalid cntlid range [1-0]" 00:17:54.120 }' 00:17:54.120 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:54.120 { 00:17:54.120 "nqn": "nqn.2016-06.io.spdk:cnode31302", 00:17:54.120 "max_cntlid": 0, 00:17:54.120 "method": "nvmf_create_subsystem", 00:17:54.120 "req_id": 1 00:17:54.120 } 00:17:54.120 Got JSON-RPC error response 00:17:54.120 response: 00:17:54.120 { 00:17:54.120 "code": -32602, 00:17:54.120 "message": "Invalid cntlid range [1-0]" 00:17:54.120 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.120 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9854 -I 65520 00:17:54.379 [2024-12-15 16:05:22.785795] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9854: invalid cntlid range [1-65520] 00:17:54.379 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:54.379 { 00:17:54.379 "nqn": "nqn.2016-06.io.spdk:cnode9854", 00:17:54.379 "max_cntlid": 65520, 00:17:54.379 "method": "nvmf_create_subsystem", 00:17:54.379 "req_id": 1 00:17:54.379 } 00:17:54.379 Got JSON-RPC error response 00:17:54.379 response: 00:17:54.379 { 00:17:54.379 "code": -32602, 00:17:54.379 "message": "Invalid cntlid range [1-65520]" 00:17:54.379 }' 00:17:54.379 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:54.379 { 00:17:54.379 "nqn": "nqn.2016-06.io.spdk:cnode9854", 00:17:54.379 "max_cntlid": 65520, 00:17:54.379 "method": "nvmf_create_subsystem", 00:17:54.379 "req_id": 1 00:17:54.379 } 00:17:54.379 Got JSON-RPC error response 00:17:54.379 response: 00:17:54.379 { 00:17:54.379 "code": -32602, 00:17:54.379 "message": "Invalid cntlid range [1-65520]" 00:17:54.379 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.379 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9158 -i 6 -I 5 00:17:54.638 [2024-12-15 16:05:22.994534] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9158: invalid cntlid range [6-5] 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:54.638 { 00:17:54.638 "nqn": "nqn.2016-06.io.spdk:cnode9158", 00:17:54.638 "min_cntlid": 6, 00:17:54.638 "max_cntlid": 5, 00:17:54.638 "method": "nvmf_create_subsystem", 00:17:54.638 "req_id": 1 00:17:54.638 } 00:17:54.638 Got JSON-RPC error response 00:17:54.638 response: 00:17:54.638 { 00:17:54.638 "code": -32602, 00:17:54.638 "message": "Invalid cntlid range [6-5]" 00:17:54.638 }' 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:54.638 { 00:17:54.638 "nqn": "nqn.2016-06.io.spdk:cnode9158", 00:17:54.638 "min_cntlid": 6, 00:17:54.638 "max_cntlid": 5, 00:17:54.638 "method": "nvmf_create_subsystem", 00:17:54.638 "req_id": 1 00:17:54.638 } 00:17:54.638 Got JSON-RPC error response 00:17:54.638 response: 00:17:54.638 { 00:17:54.638 
"code": -32602, 00:17:54.638 "message": "Invalid cntlid range [6-5]" 00:17:54.638 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:54.638 { 00:17:54.638 "name": "foobar", 00:17:54.638 "method": "nvmf_delete_target", 00:17:54.638 "req_id": 1 00:17:54.638 } 00:17:54.638 Got JSON-RPC error response 00:17:54.638 response: 00:17:54.638 { 00:17:54.638 "code": -32602, 00:17:54.638 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:54.638 }' 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:54.638 { 00:17:54.638 "name": "foobar", 00:17:54.638 "method": "nvmf_delete_target", 00:17:54.638 "req_id": 1 00:17:54.638 } 00:17:54.638 Got JSON-RPC error response 00:17:54.638 response: 00:17:54.638 { 00:17:54.638 "code": -32602, 00:17:54.638 "message": "The specified target doesn't exist, cannot delete it." 00:17:54.638 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:54.638 rmmod nvme_rdma 00:17:54.638 rmmod nvme_fabrics 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 2809138 ']' 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 2809138 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2809138 ']' 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2809138 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.638 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2809138 00:17:54.898 16:05:23 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:54.898 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:54.898 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2809138' 00:17:54.898 killing process with pid 2809138 00:17:54.898 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2809138 00:17:54.898 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2809138 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:17:55.158 00:17:55.158 real 0m11.015s 00:17:55.158 user 0m19.996s 00:17:55.158 sys 0m6.275s 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:55.158 ************************************ 00:17:55.158 END TEST nvmf_invalid 00:17:55.158 ************************************ 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.158 ************************************ 00:17:55.158 START TEST nvmf_connect_stress 00:17:55.158 ************************************ 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:55.158 * Looking for test storage... 
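nvmftestfini above tears the fixture down: it unloads nvme-rdma and nvme-fabrics, then kills the nvmf_tgt reactor (pid 2809138) before run_test prints the END/START banners between the two suites. The kill -0 / uname / ps / wait sequence in the trace follows the usual killprocess pattern; a condensed sketch of that pattern, simplified from what autotest_common.sh traces here rather than the verbatim helper:

    # Condensed sketch of the killprocess pattern in the trace above
    # (the real helper carries more platform branches than shown).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                  # already gone, nothing to do
        if [ "$(uname)" = Linux ] &&
           [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            sudo kill "$pid"                        # signal through the sudo wrapper
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                 # reap and surface the exit status
    }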
00:17:55.158 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:55.158 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:55.418 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:55.418 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.418 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.418 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.418 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.418 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.418 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.419 --rc genhtml_branch_coverage=1 00:17:55.419 --rc genhtml_function_coverage=1 00:17:55.419 --rc genhtml_legend=1 00:17:55.419 --rc geninfo_all_blocks=1 00:17:55.419 --rc geninfo_unexecuted_blocks=1 00:17:55.419 00:17:55.419 ' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.419 --rc genhtml_branch_coverage=1 00:17:55.419 --rc genhtml_function_coverage=1 00:17:55.419 --rc genhtml_legend=1 00:17:55.419 --rc geninfo_all_blocks=1 00:17:55.419 --rc geninfo_unexecuted_blocks=1 00:17:55.419 00:17:55.419 ' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.419 --rc genhtml_branch_coverage=1 00:17:55.419 --rc genhtml_function_coverage=1 00:17:55.419 --rc genhtml_legend=1 00:17:55.419 --rc geninfo_all_blocks=1 00:17:55.419 --rc geninfo_unexecuted_blocks=1 00:17:55.419 00:17:55.419 ' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.419 --rc genhtml_branch_coverage=1 00:17:55.419 --rc genhtml_function_coverage=1 00:17:55.419 --rc genhtml_legend=1 00:17:55.419 --rc geninfo_all_blocks=1 00:17:55.419 --rc geninfo_unexecuted_blocks=1 00:17:55.419 00:17:55.419 ' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.419 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:55.419 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:55.420 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:55.420 16:05:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:01.995 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:01.995 
16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:01.995 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:01.995 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.995 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:01.996 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- 
# is_hw=yes 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # rdma_device_init 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@526 -- # allocate_nic_ips 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.996 16:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:01.996 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:01.996 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:01.996 altname enp217s0f0np0 00:18:01.996 altname ens818f0np0 00:18:01.996 inet 192.168.100.8/24 scope global mlx_0_0 00:18:01.996 valid_lft forever preferred_lft forever 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:01.996 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:01.996 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:01.996 altname enp217s0f1np1 00:18:01.996 altname ens818f1np1 00:18:01.996 inet 192.168.100.9/24 scope global mlx_0_1 00:18:01.996 valid_lft forever preferred_lft forever 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- 
# '[' '' == iso ']' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:01.996 16:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:18:01.996 192.168.100.9' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:18:01.996 192.168.100.9' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # head -n 1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:18:01.996 192.168.100.9' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # tail -n +2 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # head -n 1 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:01.996 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=2813308 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 2813308 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2813308 ']' 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.997 16:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.997 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.997 [2024-12-15 16:05:30.521286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:01.997 [2024-12-15 16:05:30.521333] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.256 [2024-12-15 16:05:30.592226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:02.256 [2024-12-15 16:05:30.631257] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.256 [2024-12-15 16:05:30.631296] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.256 [2024-12-15 16:05:30.631305] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.256 [2024-12-15 16:05:30.631314] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.256 [2024-12-15 16:05:30.631321] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.256 [2024-12-15 16:05:30.631421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.256 [2024-12-15 16:05:30.631505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.256 [2024-12-15 16:05:30.631507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.256 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.256 [2024-12-15 16:05:30.806587] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11425c0/0x1146ab0) succeed. 00:18:02.256 [2024-12-15 16:05:30.817348] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1143b60/0x1188150) succeed. 
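The address-derivation step traced above reduces to the following minimal sketch (interface names mlx_0_0/mlx_0_1 as reported by rxe_cfg; the helper name get_ip_address is the one from nvmf/common.sh):

    # Take the IPv4 address of one RDMA interface, stripping the /prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Two interfaces yield a two-line list; head/tail split it into the
    # first and second target IPs used for the listeners that follow.
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)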
00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.516 [2024-12-15 16:05:30.923915] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.516 NULL1 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2813337 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.516 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.516 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.517 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.086 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.086 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:03.086 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:03.086 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.086 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.345 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.345 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:03.345 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:03.345 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.345 16:05:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.605 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.605 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:03.605 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:03.605 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.605 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.864 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.864 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:03.864 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:03.864 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.864 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.433 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.433 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 
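The trace above launches connect_stress against nqn.2016-06.io.spdk:cnode1 and then repeatedly probes it. A rough sketch of that monitoring pattern, assuming the loop structure and the rpc.txt redirect (the exact bounds live in connect_stress.sh; PERF_PID and rpcs are set there):

    # kill -0 delivers no signal: it only asserts that the I/O generator
    # is still alive. Each pass replays the batch of RPCs collected in
    # rpc.txt, exercising the target while connections are under stress.
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"
    done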
00:18:04.433 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.433 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.433 16:05:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.692 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.692 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:04.692 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.692 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.692 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.952 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.952 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:04.952 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.952 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.952 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.211 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.211 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:05.211 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:05.211 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.211 16:05:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.471 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.471 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:05.471 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:05.471 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.471 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.039 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.039 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:06.039 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.039 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.039 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.298 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.298 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2813337 00:18:06.298 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.298 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.298 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.558 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.558 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:06.558 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.558 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.558 16:05:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.817 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.817 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:06.817 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.817 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.817 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.383 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.383 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:07.383 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:07.383 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.383 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.642 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.642 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:07.642 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:07.642 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.642 16:05:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.901 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.901 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:07.901 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:07.901 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.901 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.161 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.161 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2813337 00:18:08.161 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.161 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.161 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.430 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.430 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:08.430 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.430 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.430 16:05:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.746 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.746 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:08.746 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.746 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.746 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.336 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.336 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:09.336 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.336 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.336 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.595 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.595 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:09.595 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.595 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.595 16:05:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.855 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.855 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:09.855 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.855 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.855 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.114 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.114 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2813337 00:18:10.114 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.114 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.114 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.374 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.374 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:10.374 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.374 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.374 16:05:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.943 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.943 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:10.943 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.943 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.943 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.202 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.202 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:11.202 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.202 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.202 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.461 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.461 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:11.461 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.461 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.461 16:05:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.721 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.721 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:11.721 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.721 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.721 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.290 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.290 16:05:40 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:12.290 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.290 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.290 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.549 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.549 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:12.549 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.549 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.549 16:05:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.549 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2813337 00:18:12.809 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2813337) - No such process 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2813337 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:12.809 rmmod nvme_rdma 00:18:12.809 rmmod nvme_fabrics 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 2813308 ']' 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 2813308 00:18:12.809 16:05:41 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2813308 ']' 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2813308 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2813308 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2813308' 00:18:12.809 killing process with pid 2813308 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2813308 00:18:12.809 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2813308 00:18:13.068 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:13.068 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:18:13.068 00:18:13.068 real 0m17.983s 00:18:13.068 user 0m39.707s 00:18:13.068 sys 0m8.011s 00:18:13.068 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:13.068 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 ************************************ 00:18:13.068 END TEST nvmf_connect_stress 00:18:13.068 ************************************ 00:18:13.068 16:05:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:13.068 16:05:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:13.068 16:05:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:13.068 16:05:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.329 ************************************ 00:18:13.329 START TEST nvmf_fused_ordering 00:18:13.329 ************************************ 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:13.329 * Looking for test storage... 
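The teardown just logged (modprobe -v -r of the nvme modules, then killprocess of the nvmf_tgt pid) follows a guard-then-kill pattern; a condensed sketch, not the verbatim autotest_common.sh implementation:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0        # nothing to do if the target already exited
        # Refuse to kill sudo itself; the comm check mirrors the ps call in the log.
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                       # reap it so the next test starts clean
    }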
00:18:13.329 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:13.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.329 --rc genhtml_branch_coverage=1 00:18:13.329 --rc genhtml_function_coverage=1 00:18:13.329 --rc genhtml_legend=1 00:18:13.329 --rc geninfo_all_blocks=1 00:18:13.329 --rc geninfo_unexecuted_blocks=1 00:18:13.329 00:18:13.329 ' 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:13.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.329 --rc genhtml_branch_coverage=1 00:18:13.329 --rc genhtml_function_coverage=1 00:18:13.329 --rc genhtml_legend=1 00:18:13.329 --rc geninfo_all_blocks=1 00:18:13.329 --rc geninfo_unexecuted_blocks=1 00:18:13.329 00:18:13.329 ' 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:13.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.329 --rc genhtml_branch_coverage=1 00:18:13.329 --rc genhtml_function_coverage=1 00:18:13.329 --rc genhtml_legend=1 00:18:13.329 --rc geninfo_all_blocks=1 00:18:13.329 --rc geninfo_unexecuted_blocks=1 00:18:13.329 00:18:13.329 ' 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:13.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.329 --rc genhtml_branch_coverage=1 00:18:13.329 --rc genhtml_function_coverage=1 00:18:13.329 --rc genhtml_legend=1 00:18:13.329 --rc geninfo_all_blocks=1 00:18:13.329 --rc geninfo_unexecuted_blocks=1 00:18:13.329 00:18:13.329 ' 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:13.329 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.588 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:13.588 16:05:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:20.163 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:20.163 
16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:20.163 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:20.163 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:20.163 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- 
# is_hw=yes 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # rdma_device_init 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@526 -- # allocate_nic_ips 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.163 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.164 16:05:48 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:20.164 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:20.164 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:20.164 altname enp217s0f0np0 00:18:20.164 altname ens818f0np0 00:18:20.164 inet 192.168.100.8/24 scope global mlx_0_0 00:18:20.164 valid_lft forever preferred_lft forever 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:20.164 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:20.164 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:20.164 altname enp217s0f1np1 00:18:20.164 altname ens818f1np1 00:18:20.164 inet 192.168.100.9/24 scope global mlx_0_1 00:18:20.164 valid_lft forever preferred_lft forever 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- 
# '[' '' == iso ']' 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:20.164 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:20.424 16:05:48 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:18:20.424 192.168.100.9' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:18:20.424 192.168.100.9' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # head -n 1 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:18:20.424 192.168.100.9' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # tail -n +2 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # head -n 1 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=2818472 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 2818472 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2818472 ']' 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.424 16:05:48 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.424 16:05:48 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.424 [2024-12-15 16:05:48.854358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:20.424 [2024-12-15 16:05:48.854405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.424 [2024-12-15 16:05:48.924357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.424 [2024-12-15 16:05:48.961917] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.424 [2024-12-15 16:05:48.961957] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.424 [2024-12-15 16:05:48.961966] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.424 [2024-12-15 16:05:48.961975] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.424 [2024-12-15 16:05:48.961982] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.424 [2024-12-15 16:05:48.962006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.684 [2024-12-15 16:05:49.113999] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf10ed0/0xf153c0) succeed. 00:18:20.684 [2024-12-15 16:05:49.123041] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf123d0/0xf56a60) succeed. 
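The trace above records the harness deriving the target IPs from the Mellanox netdevs, launching nvmf_tgt, waiting for its RPC socket, and creating the RDMA transport. A minimal stand-alone re-creation of that bring-up follows; SPDK_DIR is a placeholder for the checkout path, the harness's nvmfappstart/waitforlisten/rpc_cmd helpers wrap these same steps, and all option values are copied from the log:

```bash
#!/usr/bin/env bash
# Sketch only (assumes an SPDK build at $SPDK_DIR; values taken from the log).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

# Derive the first target IP the way get_ip_address did above: take the IPv4
# address assigned to the RDMA netdev and strip the /24 prefix length.
NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)

# Start the NVMe-oF target app on core mask 0x2 with all tracepoint groups
# enabled, as nvmfappstart did (this corresponds to pid 2818472 in the log).
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Wait for the RPC socket to come up, then create the RDMA transport with the
# options carried in NVMF_TRANSPORT_OPTS ('-t rdma --num-shared-buffers 1024').
"$SPDK_DIR/scripts/rpc.py" -t 30 rpc_get_methods >/dev/null
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
```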
00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.684 [2024-12-15 16:05:49.185294] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.684 NULL1 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.684 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:20.684 [2024-12-15 16:05:49.241118] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
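With the transport up, the script assembles a one-namespace subsystem and points the fused_ordering test app at the first target IP. The rpc_cmd calls traced above consolidate to the sketch below (identifiers and sizes are all from the log; $SPDK_DIR as in the previous sketch):

```bash
# Consolidated from the rpc_cmd trace above; sketch only.
rpc="$SPDK_DIR/scripts/rpc.py"

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks ("size: 1GB" below)
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The test binary connects via a transport ID string and issues the fused
# compare-and-write pairs whose per-pair status lines follow.
"$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```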
00:18:20.684 [2024-12-15 16:05:49.241162] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818653 ] 00:18:20.944 Attached to nqn.2016-06.io.spdk:cnode1 00:18:20.944 Namespace ID: 1 size: 1GB 00:18:20.944 fused_ordering(0) 00:18:20.944 fused_ordering(1) [... fused_ordering(2) through fused_ordering(1022) collapsed: 1,024 consecutive status lines in total, one per fused command pair, identical apart from N, emitted between 00:18:20.944 and 00:18:21.467 ...] 00:18:21.467 fused_ordering(1023) 00:18:21.467 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:21.467 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:21.467 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:21.467 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:21.468 rmmod nvme_rdma 00:18:21.468 rmmod nvme_fabrics 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:21.468
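nvmftestfini, entered at the end of the run above, tears the fixture down in two stages: nvmfcleanup syncs and unloads the kernel initiator modules (the rmmod nvme_rdma/nvme_fabrics messages above), then killprocess reaps the nvmf_tgt app, which is what the trace below records. A condensed sketch of both helpers; the retry pacing and exact guard order are assumptions, while the individual commands are taken from the trace:

```bash
# Sketch of the teardown around this point in the log; not the harness's
# exact implementation.
nvmfcleanup_sketch() {
    sync
    set +e                          # module removal may fail while still busy
    for i in {1..20}; do            # retry loop seen in the trace
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                     # assumed pacing between attempts
    done
    set -e
}

killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1       # no pid recorded, nothing to do
    kill -0 "$pid" || return 0      # process already exited
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
    [ "$process_name" = sudo ] && return 1            # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                     # reap it so resources free before the next test
}
```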
16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 2818472 ']' 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 2818472 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2818472 ']' 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2818472 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.468 16:05:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2818472 00:18:21.468 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:21.468 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:21.468 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2818472' 00:18:21.468 killing process with pid 2818472 00:18:21.468 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2818472 00:18:21.468 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2818472 00:18:21.727 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:21.727 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:18:21.727 00:18:21.727 real 0m8.544s 00:18:21.727 user 0m3.965s 00:18:21.727 sys 0m5.729s 00:18:21.727 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.727 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:21.727 ************************************ 00:18:21.727 END TEST nvmf_fused_ordering 00:18:21.727 ************************************ 00:18:21.727 16:05:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:18:21.727 16:05:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:21.727 16:05:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.727 16:05:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.987 ************************************ 00:18:21.987 START TEST nvmf_ns_masking 00:18:21.987 ************************************ 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:18:21.987 * Looking for test storage...
00:18:21.987 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:21.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.987 --rc genhtml_branch_coverage=1 00:18:21.987 --rc genhtml_function_coverage=1 00:18:21.987 --rc genhtml_legend=1 00:18:21.987 --rc geninfo_all_blocks=1 00:18:21.987 --rc geninfo_unexecuted_blocks=1 00:18:21.987 00:18:21.987 ' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:21.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.987 --rc genhtml_branch_coverage=1 00:18:21.987 --rc genhtml_function_coverage=1 00:18:21.987 --rc genhtml_legend=1 00:18:21.987 --rc geninfo_all_blocks=1 00:18:21.987 --rc geninfo_unexecuted_blocks=1 00:18:21.987 00:18:21.987 ' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:21.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.987 --rc genhtml_branch_coverage=1 00:18:21.987 --rc genhtml_function_coverage=1 00:18:21.987 --rc genhtml_legend=1 00:18:21.987 --rc geninfo_all_blocks=1 00:18:21.987 --rc geninfo_unexecuted_blocks=1 00:18:21.987 00:18:21.987 ' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:21.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.987 --rc genhtml_branch_coverage=1 00:18:21.987 --rc genhtml_function_coverage=1 00:18:21.987 --rc genhtml_legend=1 00:18:21.987 --rc geninfo_all_blocks=1 00:18:21.987 --rc geninfo_unexecuted_blocks=1 00:18:21.987 00:18:21.987 ' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.987 16:05:50 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:21.987 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:21.987 16:05:50 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4f4ce7a4-4dcd-4bc0-83eb-b8e4785b8a4d 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=627eb695-269c-4eb9-99e1-cd077556c004 00:18:21.987 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=05967dfc-25ee-493c-8c21-df63e85cc88c 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.988 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.247 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:22.247 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:22.247 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:22.247 16:05:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:28.820 Found 
0000:d9:00.0 (0x15b3 - 0x1015) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:28.820 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.820 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:28.820 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:28.821 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # rdma_device_init 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@526 -- # allocate_nic_ips 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@108 -- # echo mlx_0_0 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:28.821 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:28.821 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:28.821 altname enp217s0f0np0 00:18:28.821 altname ens818f0np0 00:18:28.821 inet 192.168.100.8/24 scope global mlx_0_0 00:18:28.821 valid_lft forever preferred_lft forever 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:28.821 16:05:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:28.821 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:28.821 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:28.821 altname enp217s0f1np1 00:18:28.821 
altname ens818f1np1 00:18:28.821 inet 192.168.100.9/24 scope global mlx_0_1 00:18:28.821 valid_lft forever preferred_lft forever 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 
-- # cut -d/ -f1 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:28.821 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:18:28.822 192.168.100.9' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:18:28.822 192.168.100.9' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # head -n 1 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:18:28.822 192.168.100.9' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # head -n 1 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # tail -n +2 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=2822087 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 2822087 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2822087 ']' 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 
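The waitforlisten call traced here blocks until the freshly launched nvmf_tgt (pid 2822472 in the fused_ordering run, pid 2822087 here) answers on its UNIX-domain RPC socket before any test RPCs are issued. A minimal sketch of that style of poll loop (an illustration, not the exact helper from autotest_common.sh) would be:

    # poll the SPDK RPC socket until it accepts a trivial request; bail out if the target dies
    pid=2822087 rpc_addr=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done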
00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:28.822 [2024-12-15 16:05:57.187617] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:28.822 [2024-12-15 16:05:57.187673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.822 [2024-12-15 16:05:57.258911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.822 [2024-12-15 16:05:57.297249] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.822 [2024-12-15 16:05:57.297292] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.822 [2024-12-15 16:05:57.297302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.822 [2024-12-15 16:05:57.297310] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.822 [2024-12-15 16:05:57.297333] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.822 [2024-12-15 16:05:57.297356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:28.822 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:29.081 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.082 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:29.082 [2024-12-15 16:05:57.615148] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21b2bd0/0x21b70c0) succeed. 00:18:29.082 [2024-12-15 16:05:57.623718] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21b40d0/0x21f8760) succeed. 
00:18:29.341 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:29.341 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:29.341 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:29.341 Malloc1 00:18:29.341 16:05:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:29.600 Malloc2 00:18:29.600 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:29.860 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:30.119 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:30.119 [2024-12-15 16:05:58.618254] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:30.119 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:30.119 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05967dfc-25ee-493c-8c21-df63e85cc88c -a 192.168.100.8 -s 4420 -i 4 00:18:30.378 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:30.378 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:30.378 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.378 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:30.378 16:05:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:32.916 16:06:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:32.916 16:06:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:32.916 16:06:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:32.916 16:06:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:32.916 16:06:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.916 16:06:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:32.916 16:06:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:32.916 16:06:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:32.916 [ 0]:0x1 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931dcaa0e5a34dc2af4588f59ddfe5b9 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931dcaa0e5a34dc2af4588f59ddfe5b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:32.916 [ 0]:0x1 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931dcaa0e5a34dc2af4588f59ddfe5b9 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931dcaa0e5a34dc2af4588f59ddfe5b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:32.916 [ 1]:0x2 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6665d4c7a3c641afa807f7d8ba7ff938 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6665d4c7a3c641afa807f7d8ba7ff938 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:32.916 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:18:33.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.176 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.435 16:06:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:33.694 16:06:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:33.694 16:06:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05967dfc-25ee-493c-8c21-df63e85cc88c -a 192.168.100.8 -s 4420 -i 4 00:18:33.953 16:06:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:33.953 16:06:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:33.953 16:06:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.953 16:06:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:33.953 16:06:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:33.953 16:06:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:35.859 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:35.859 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:35.859 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:35.859 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:35.859 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:35.859 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:36.119 [ 0]:0x2 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6665d4c7a3c641afa807f7d8ba7ff938 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6665d4c7a3c641afa807f7d8ba7ff938 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.119 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:36.378 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:36.378 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.378 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:36.378 [ 0]:0x1 00:18:36.378 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:36.378 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.378 16:06:04 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931dcaa0e5a34dc2af4588f59ddfe5b9 00:18:36.378 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931dcaa0e5a34dc2af4588f59ddfe5b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.378 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:36.379 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.379 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:36.379 [ 1]:0x2 00:18:36.379 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:36.379 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.379 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6665d4c7a3c641afa807f7d8ba7ff938 00:18:36.379 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6665d4c7a3c641afa807f7d8ba7ff938 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.379 16:06:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:36.638 [ 0]:0x2 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6665d4c7a3c641afa807f7d8ba7ff938 00:18:36.638 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6665d4c7a3c641afa807f7d8ba7ff938 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.639 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:36.639 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.898 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:37.157 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:37.157 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05967dfc-25ee-493c-8c21-df63e85cc88c -a 192.168.100.8 -s 4420 -i 4 00:18:37.415 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:37.415 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:37.415 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.415 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:37.415 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:37.415 16:06:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:39.952 16:06:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:39.952 16:06:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:39.952 16:06:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:39.952 16:06:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:39.952 16:06:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.952 16:06:07 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:39.952 16:06:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:39.952 16:06:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:39.952 [ 0]:0x1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931dcaa0e5a34dc2af4588f59ddfe5b9 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931dcaa0e5a34dc2af4588f59ddfe5b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:39.952 [ 1]:0x2 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6665d4c7a3c641afa807f7d8ba7ff938 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6665d4c7a3c641afa807f7d8ba7ff938 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:39.952 16:06:08 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:39.952 [ 0]:0x2 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6665d4c7a3c641afa807f7d8ba7ff938 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6665d4c7a3c641afa807f7d8ba7ff938 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:39.952 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:39.953 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.953 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:39.953 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.953 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:39.953 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.953 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:39.953 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:39.953 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:40.212 [2024-12-15 16:06:08.589346] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:40.212 request: 00:18:40.212 { 00:18:40.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.212 "nsid": 2, 00:18:40.212 "host": "nqn.2016-06.io.spdk:host1", 00:18:40.212 "method": "nvmf_ns_remove_host", 00:18:40.212 "req_id": 1 00:18:40.212 } 00:18:40.212 Got JSON-RPC error response 00:18:40.212 response: 00:18:40.212 { 00:18:40.212 "code": -32602, 00:18:40.212 "message": "Invalid parameters" 00:18:40.212 } 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:40.212 16:06:08 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:40.212 [ 0]:0x2 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6665d4c7a3c641afa807f7d8ba7ff938 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6665d4c7a3c641afa807f7d8ba7ff938 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:40.212 16:06:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:40.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2824681 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2824681 /var/tmp/host.sock 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2824681 ']' 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:40.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
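(For reference: the visibility probes traced above reduce to the sketch below, reconstructed from the traced commands; the helper name ns_is_visible and its exact body in target/ns_masking.sh are inferred from this trace rather than copied from the script. The NOT wrapper seen in the trace simply asserts that this check fails for a masked namespace.)

ns_is_visible() {                                  # sketch, assumes nvme-cli + jq as traced
    nvme list-ns /dev/nvme0 | grep "$1"            # prints e.g. "[ 0]:0x2" when listed
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    # a namespace masked away from this host reports an all-zero NGUID
    [[ $nguid != "00000000000000000000000000000000" ]]
}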
00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.472 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:40.731 [2024-12-15 16:06:09.083894] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:40.732 [2024-12-15 16:06:09.083948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824681 ] 00:18:40.732 [2024-12-15 16:06:09.155672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.732 [2024-12-15 16:06:09.194142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.991 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.991 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:40.991 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:41.250 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:41.250 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4f4ce7a4-4dcd-4bc0-83eb-b8e4785b8a4d 00:18:41.250 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:41.250 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4F4CE7A44DCD4BC083EBB8E4785B8A4D -i 00:18:41.509 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 627eb695-269c-4eb9-99e1-cd077556c004 00:18:41.509 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:41.509 16:06:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 627EB695269C4EB999E1CD077556C004 -i 00:18:41.769 16:06:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:41.769 16:06:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:42.028 16:06:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:42.028 16:06:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:18:42.287 nvme0n1 00:18:42.287 16:06:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:42.287 16:06:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:42.547 nvme1n2 00:18:42.547 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:42.547 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:42.547 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:42.547 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:42.547 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:42.806 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:42.806 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:42.806 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:42.806 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4f4ce7a4-4dcd-4bc0-83eb-b8e4785b8a4d == \4\f\4\c\e\7\a\4\-\4\d\c\d\-\4\b\c\0\-\8\3\e\b\-\b\8\e\4\7\8\5\b\8\a\4\d ]] 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 627eb695-269c-4eb9-99e1-cd077556c004 == \6\2\7\e\b\6\9\5\-\2\6\9\c\-\4\e\b\9\-\9\9\e\1\-\c\d\0\7\7\5\5\6\c\0\0\4 ]] 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2824681 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2824681 ']' 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2824681 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.066 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2824681 00:18:43.325 16:06:11 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:43.325 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:43.325 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2824681' 00:18:43.325 killing process with pid 2824681 00:18:43.325 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2824681 00:18:43.325 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2824681 00:18:43.585 16:06:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:43.845 rmmod nvme_rdma 00:18:43.845 rmmod nvme_fabrics 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 2822087 ']' 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 2822087 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2822087 ']' 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2822087 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2822087 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2822087' 00:18:43.845 killing process with pid 2822087 00:18:43.845 
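(The teardown traced around this point follows the autotest killprocess pattern; this is a rough reconstruction from the traced checks — the function body is an approximation, not copied verbatim from common/autotest_common.sh.)

killprocess() {                                    # sketch of the traced pattern
    local pid=$1
    [ -z "$pid" ] && return 1                      # traced as: '[' -z 2824681 ']'
    kill -0 "$pid" || return 1                     # process still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # a sudo wrapper would need special handling; not exercised here (name was reactor_1)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                    # reap before moving on
}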
16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2822087 00:18:43.845 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2822087 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:18:44.104 00:18:44.104 real 0m22.258s 00:18:44.104 user 0m24.787s 00:18:44.104 sys 0m7.306s 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:44.104 ************************************ 00:18:44.104 END TEST nvmf_ns_masking 00:18:44.104 ************************************ 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.104 ************************************ 00:18:44.104 START TEST nvmf_nvme_cli 00:18:44.104 ************************************ 00:18:44.104 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:18:44.364 * Looking for test storage... 
00:18:44.364 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.364 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:44.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.365 --rc genhtml_branch_coverage=1 00:18:44.365 --rc genhtml_function_coverage=1 00:18:44.365 --rc genhtml_legend=1 00:18:44.365 --rc geninfo_all_blocks=1 00:18:44.365 --rc geninfo_unexecuted_blocks=1 00:18:44.365 00:18:44.365 ' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:44.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.365 --rc genhtml_branch_coverage=1 00:18:44.365 --rc genhtml_function_coverage=1 00:18:44.365 --rc genhtml_legend=1 00:18:44.365 --rc geninfo_all_blocks=1 00:18:44.365 --rc geninfo_unexecuted_blocks=1 00:18:44.365 00:18:44.365 ' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:44.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.365 --rc genhtml_branch_coverage=1 00:18:44.365 --rc genhtml_function_coverage=1 00:18:44.365 --rc genhtml_legend=1 00:18:44.365 --rc geninfo_all_blocks=1 00:18:44.365 --rc geninfo_unexecuted_blocks=1 00:18:44.365 00:18:44.365 ' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:44.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.365 --rc genhtml_branch_coverage=1 00:18:44.365 --rc genhtml_function_coverage=1 00:18:44.365 --rc genhtml_legend=1 00:18:44.365 --rc geninfo_all_blocks=1 00:18:44.365 --rc geninfo_unexecuted_blocks=1 00:18:44.365 00:18:44.365 ' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.365 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.365 16:06:12 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.365 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.366 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:44.366 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:44.366 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.366 16:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.064 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.064 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:51.064 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:51.064 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:51.064 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:51.065 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:51.065 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:51.065 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:51.065 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # rdma_device_init 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:51.065 16:06:19 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # allocate_nic_ips 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 
-- # awk '{print $4}' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:51.065 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:51.065 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:51.065 altname enp217s0f0np0 00:18:51.065 altname ens818f0np0 00:18:51.065 inet 192.168.100.8/24 scope global mlx_0_0 00:18:51.065 valid_lft forever preferred_lft forever 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:51.065 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:51.065 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:51.065 altname enp217s0f1np1 00:18:51.065 altname ens818f1np1 00:18:51.065 inet 192.168.100.9/24 scope global mlx_0_1 00:18:51.065 valid_lft forever preferred_lft forever 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:51.065 16:06:19 
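get_ip_address, exercised once per interface above, is a three-stage pipeline over "ip -o -4": field 4 of the one-line-per-address output is the CIDR, and cut drops the prefix length. The same extraction in isolation (interface name from the log):

  interface=mlx_0_0
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1   # prints 192.168.100.8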
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:18:51.065 192.168.100.9' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # head -n 1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:18:51.065 192.168.100.9' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:18:51.065 192.168.100.9' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # tail -n +2 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # head -n 1 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
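RDMA_IP_LIST above is a newline-separated string, so picking the first and second target addresses is plain head/tail juggling, exactly as traced:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)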
nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:18:51.065 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=2828666 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 2828666 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2828666 ']' 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.066 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.066 [2024-12-15 16:06:19.556418] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:51.066 [2024-12-15 16:06:19.556465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.066 [2024-12-15 16:06:19.621972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.325 [2024-12-15 16:06:19.663703] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.325 [2024-12-15 16:06:19.663742] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.325 [2024-12-15 16:06:19.663752] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.325 [2024-12-15 16:06:19.663760] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
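nvmfappstart above comes down to launching build/bin/nvmf_tgt with the traced flags and blocking until its RPC socket answers; waitforlisten is the test helper that does the blocking. A sketch under the paths shown in the log, using rpc.py spdk_get_version as a stand-in readiness probe (an assumption, not what the helper literally runs):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the app answers (stand-in for waitforlisten)
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
      sleep 0.5
  done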
00:18:51.325 [2024-12-15 16:06:19.663767] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.325 [2024-12-15 16:06:19.663821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.325 [2024-12-15 16:06:19.664028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.325 [2024-12-15 16:06:19.664096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.325 [2024-12-15 16:06:19.664098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.325 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.325 [2024-12-15 16:06:19.835983] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe4fe40/0xe54330) succeed. 00:18:51.325 [2024-12-15 16:06:19.846356] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe51480/0xe959d0) succeed. 
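With both mlx5 IB devices created, the first rpc_cmd call sets up the fabric. rpc_cmd in these tests forwards to scripts/rpc.py, so the equivalent direct invocation looks like this (flag values verbatim from the trace; -u should be the 8 KiB I/O unit size and --num-shared-buffers the shared data-buffer pool):

  rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192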
00:18:51.584 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.584 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.584 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.584 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.584 Malloc0 00:18:51.584 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.584 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:51.584 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.584 16:06:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.584 Malloc1 00:18:51.584 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.584 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:51.584 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.585 [2024-12-15 16:06:20.041516] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:51.585 16:06:20 
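The subsystem wiring traced above, as direct rpc.py calls: two 64 MB RAM disks with 512-byte blocks become two namespaces of cnode1, -a admits any host NQN, and -s sets the serial number that the host side greps for later:

  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420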
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.585 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:18:51.585 00:18:51.585 Discovery Log Number of Records 2, Generation counter 2 00:18:51.585 =====Discovery Log Entry 0====== 00:18:51.585 trtype: rdma 00:18:51.585 adrfam: ipv4 00:18:51.585 subtype: current discovery subsystem 00:18:51.585 treq: not required 00:18:51.585 portid: 0 00:18:51.585 trsvcid: 4420 00:18:51.585 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:51.585 traddr: 192.168.100.8 00:18:51.585 eflags: explicit discovery connections, duplicate discovery information 00:18:51.585 rdma_prtype: not specified 00:18:51.585 rdma_qptype: connected 00:18:51.585 rdma_cms: rdma-cm 00:18:51.585 rdma_pkey: 0x0000 00:18:51.585 =====Discovery Log Entry 1====== 00:18:51.585 trtype: rdma 00:18:51.585 adrfam: ipv4 00:18:51.585 subtype: nvme subsystem 00:18:51.585 treq: not required 00:18:51.585 portid: 0 00:18:51.585 trsvcid: 4420 00:18:51.585 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:51.585 traddr: 192.168.100.8 00:18:51.585 eflags: none 00:18:51.585 rdma_prtype: not specified 00:18:51.585 rdma_qptype: connected 00:18:51.585 rdma_cms: rdma-cm 00:18:51.585 rdma_pkey: 0x0000 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:51.845 16:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:52.782 16:06:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:52.782 16:06:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.782 16:06:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.782 16:06:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
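The discovery log above shows both the discovery subsystem and cnode1 on 192.168.100.8:4420 over RDMA, so the host connects with the same identity flags it discovered with. NVME_CONNECT was set to 'nvme connect -i 15' earlier because this is an mlx5 RDMA rig; -i appears to cap the connection at 15 I/O queues:

  host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
        --hostid=8013ee90-59d8-e711-906e-00163566263e)
  nvme discover "${host[@]}" -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 "${host[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420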
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:52.782 16:06:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:52.782 16:06:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:54.690 /dev/nvme0n2 ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
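waitforserial, traced above, polls lsblk until the expected number of namespaces carrying the subsystem serial appears; with two namespaces on cnode1 it waits for a count of 2 before get_nvme_devs lists /dev/nvme0n1 and /dev/nvme0n2. The loop in isolation:

  expected=2
  for ((i = 0; i <= 15; i++)); do
      found=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( found == expected )) && break
      sleep 2
  done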
nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:54.690 16:06:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:56.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.069 
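Teardown mirrors setup: disconnect by NQN, confirm the serial has vanished from lsblk (the waitforserial_disconnect step above), then delete the subsystem over RPC:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # Wait until no block device carries the subsystem serial any more
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1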
16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:56.069 rmmod nvme_rdma 00:18:56.069 rmmod nvme_fabrics 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 2828666 ']' 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 2828666 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2828666 ']' 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2828666 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2828666 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2828666' 00:18:56.069 killing process with pid 2828666 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2828666 00:18:56.069 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2828666 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:18:56.329 00:18:56.329 real 0m12.043s 00:18:56.329 user 0m21.738s 00:18:56.329 sys 0m5.722s 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:56.329 ************************************ 00:18:56.329 END TEST nvmf_nvme_cli 00:18:56.329 ************************************ 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.329 ************************************ 00:18:56.329 START TEST nvmf_auth_target 00:18:56.329 ************************************ 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
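The nvme_cli teardown that closes above unloads the host modules under set +e, retrying because they can stay busy briefly after a disconnect, and then kills the nvmf_tgt reactor process. A sketch of that cleanup (retry count taken from the traced {1..20} loop):

  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  kill "$nvmfpid" && wait "$nvmfpid"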
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:56.329 * Looking for test storage... 00:18:56.329 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:56.329 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.589 --rc genhtml_branch_coverage=1 00:18:56.589 --rc genhtml_function_coverage=1 00:18:56.589 --rc genhtml_legend=1 00:18:56.589 --rc geninfo_all_blocks=1 00:18:56.589 --rc geninfo_unexecuted_blocks=1 00:18:56.589 00:18:56.589 ' 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.589 --rc genhtml_branch_coverage=1 00:18:56.589 --rc genhtml_function_coverage=1 00:18:56.589 --rc genhtml_legend=1 00:18:56.589 --rc geninfo_all_blocks=1 00:18:56.589 --rc geninfo_unexecuted_blocks=1 00:18:56.589 00:18:56.589 ' 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.589 --rc genhtml_branch_coverage=1 00:18:56.589 --rc genhtml_function_coverage=1 00:18:56.589 --rc genhtml_legend=1 00:18:56.589 --rc geninfo_all_blocks=1 00:18:56.589 --rc geninfo_unexecuted_blocks=1 00:18:56.589 00:18:56.589 ' 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.589 --rc genhtml_branch_coverage=1 00:18:56.589 --rc genhtml_function_coverage=1 00:18:56.589 --rc genhtml_legend=1 00:18:56.589 --rc geninfo_all_blocks=1 00:18:56.589 --rc geninfo_unexecuted_blocks=1 00:18:56.589 00:18:56.589 ' 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.589 16:06:24 
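The version dance above is scripts/common.sh splitting "1.15" and "2" on '.', '-' and ':' and comparing field by field, so lcov 1.15 is correctly ordered below 2. A much shorter stand-in (not the script's own implementation) gets the same answer from GNU sort -V:

  lt() { [ "$1" = "$2" ] && return 1 || [ "$(printf '%s\n%s' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message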
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:56.589 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.590 16:06:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.590 16:06:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:56.590 16:06:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:56.590 16:06:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:56.590 16:06:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:03.164 16:06:31 
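digests and dhgroups above declare the matrix auth.sh works through: three hash functions times six group settings (null plus five ffdhe sizes) for DH-HMAC-CHAP. Presumably the script nests one loop in the other; a sketch of that sweep, not the script itself:

  digests=("sha256" "sha384" "sha512")
  dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          echo "exercising DH-HMAC-CHAP with $digest / $dhgroup"
      done
  done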
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:03.164 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:03.164 16:06:31 
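gather_supported_nvmf_pci_devs rebuilds the same vendor:device allowlists for the auth run and again files 0x15b3:0x1015 under mlx. The pairing can be cross-checked from userspace with lspci's ID filter (IDs from the log):

  lspci -nn -d 15b3:1015    # should list both ports, 0000:d9:00.0 and 0000:d9:00.1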
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:03.164 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:03.164 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:03.165 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:03.165 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@438 -- # is_hw=yes 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # rdma_device_init 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # allocate_nic_ips 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:03.165 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:03.165 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:03.165 altname enp217s0f0np0 00:19:03.165 altname ens818f0np0 00:19:03.165 inet 192.168.100.8/24 scope global mlx_0_0 00:19:03.165 valid_lft forever preferred_lft forever 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:03.165 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:03.165 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:03.165 altname enp217s0f1np1 00:19:03.165 altname ens818f1np1 00:19:03.165 inet 192.168.100.9/24 scope global mlx_0_1 00:19:03.165 valid_lft forever preferred_lft forever 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:03.165 16:06:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:03.165 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:19:03.166 192.168.100.9' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:19:03.166 192.168.100.9' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # head -n 1 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # head -n 1 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:19:03.166 192.168.100.9' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # tail -n +2 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=2832870 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 2832870 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2832870 ']' 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
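The get_available_rdma_ips pass above reduces to one pipeline per interface plus a head/tail split of the resulting list. A minimal sketch with the interface names and addresses from this run (get_ip stands in for the trace's get_ip_address):

    # first IPv4 address of an interface, exactly the pipeline in the trace
    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

    RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")    # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST")  # 192.168.100.9

With '-t rdma --num-shared-buffers 1024' as the transport options and nvme-rdma loaded, nvmfappstart then launches the target that the rest of the section authenticates against.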
00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.166 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2832969 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.425 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bc668e73ee0d3de4ac8b07c12e80d37355ecd885ed44c4d8 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.hmr 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key bc668e73ee0d3de4ac8b07c12e80d37355ecd885ed44c4d8 0 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 bc668e73ee0d3de4ac8b07c12e80d37355ecd885ed44c4d8 0 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bc668e73ee0d3de4ac8b07c12e80d37355ecd885ed44c4d8 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@729 -- # python - 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.hmr 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.hmr 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.hmr 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:03.426 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c6334d0b0fe32b447bb7c67980edac64d70adcadc8acd24c3687b90a0e6b1d27 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.lpi 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c6334d0b0fe32b447bb7c67980edac64d70adcadc8acd24c3687b90a0e6b1d27 3 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c6334d0b0fe32b447bb7c67980edac64d70adcadc8acd24c3687b90a0e6b1d27 3 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c6334d0b0fe32b447bb7c67980edac64d70adcadc8acd24c3687b90a0e6b1d27 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:03.686 16:06:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.lpi 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.lpi 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.lpi 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:03.686 16:06:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=9f75c751f2dc293a7c0a58aa17ff8630 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.MrA 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 9f75c751f2dc293a7c0a58aa17ff8630 1 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 9f75c751f2dc293a7c0a58aa17ff8630 1 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=9f75c751f2dc293a7c0a58aa17ff8630 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.MrA 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.MrA 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.MrA 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c54f981aa9856512645b0149e399bf2b0118f58ed008c45d 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.j7H 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c54f981aa9856512645b0149e399bf2b0118f58ed008c45d 2 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c54f981aa9856512645b0149e399bf2b0118f58ed008c45d 2 00:19:03.686 16:06:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c54f981aa9856512645b0149e399bf2b0118f58ed008c45d 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.j7H 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.j7H 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.j7H 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=562dfa18b9bfd1e3ef6d4c7bbf5ab357b5441a70ae75d2d6 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.mvj 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 562dfa18b9bfd1e3ef6d4c7bbf5ab357b5441a70ae75d2d6 2 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 562dfa18b9bfd1e3ef6d4c7bbf5ab357b5441a70ae75d2d6 2 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=562dfa18b9bfd1e3ef6d4c7bbf5ab357b5441a70ae75d2d6 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.mvj 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.mvj 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.mvj 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b03011bdbf48f8609121b660a53d4abc 00:19:03.686 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Wos 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key b03011bdbf48f8609121b660a53d4abc 1 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b03011bdbf48f8609121b660a53d4abc 1 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b03011bdbf48f8609121b660a53d4abc 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Wos 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Wos 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Wos 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=4c94d9af6e9966fbe16e96d53b8b80b0e31b1cc1f6ab0de88d7421ee65e25ec6 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:03.946 16:06:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.UQT 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 4c94d9af6e9966fbe16e96d53b8b80b0e31b1cc1f6ab0de88d7421ee65e25ec6 3 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 4c94d9af6e9966fbe16e96d53b8b80b0e31b1cc1f6ab0de88d7421ee65e25ec6 3 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=4c94d9af6e9966fbe16e96d53b8b80b0e31b1cc1f6ab0de88d7421ee65e25ec6 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.UQT 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.UQT 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.UQT 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2832870 00:19:03.946 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2832870 ']' 00:19:03.947 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.947 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.947 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.947 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.947 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2832969 /var/tmp/host.sock 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2832969 ']' 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
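Every gen_dhchap_key call above follows one recipe: draw len/2 random bytes with xxd, then wrap the hex string in the DHHC-1 secret representation. A minimal sketch, assuming the standard DHHC-1 encoding (base64 over the ASCII hex key plus its little-endian CRC-32; digest ids 0=null, 1=sha256, 2=sha384, 3=sha512, matching the trace); gen_key and format_dhchap are illustrative names, not the exact nvmf/common.sh helpers:

    gen_key() {  # gen_key <hex-length>: 48 -> 24 random bytes, hex-encoded
        xxd -p -c0 -l $(($1 / 2)) /dev/urandom
    }

    format_dhchap() {  # format_dhchap <hexkey> <digest 0..3>
        # assumption: the trailer is the key's CRC-32 (little-endian), per DHHC-1
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$1" "$2"
    }

    format_dhchap "$(gen_key 48)" 0   # same shape as keys[0]: DHHC-1:00:...==:

Seven secrets come out of this block (keys 0-3 plus ckeys 0-2; ckeys[3] is left empty on purpose), and the keyring RPCs below register each file on both the target and host sockets.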
00:19:04.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.206 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.466 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.466 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:04.466 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hmr 00:19:04.466 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.466 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.466 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.466 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hmr 00:19:04.466 16:06:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hmr 00:19:04.466 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.lpi ]] 00:19:04.466 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lpi 00:19:04.466 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.466 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.466 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.466 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lpi 00:19:04.466 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lpi 00:19:04.726 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:04.726 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MrA 00:19:04.726 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.726 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.726 16:06:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.726 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.MrA 00:19:04.726 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.MrA 00:19:04.985 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.j7H ]] 00:19:04.985 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.j7H 00:19:04.985 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.985 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.985 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.985 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.j7H 00:19:04.985 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.j7H 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mvj 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.mvj 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.mvj 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Wos ]] 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Wos 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Wos 00:19:05.245 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Wos 00:19:05.505 16:06:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:05.505 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UQT 00:19:05.505 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.505 16:06:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.505 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.505 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.UQT 00:19:05.505 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.UQT 00:19:05.764 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:05.764 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:05.764 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.764 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.764 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:05.764 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.023 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.283 00:19:06.283 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.283 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.283 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.542 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.542 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.542 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.542 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.542 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.542 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.542 { 00:19:06.542 "cntlid": 1, 00:19:06.542 "qid": 0, 00:19:06.542 "state": "enabled", 00:19:06.542 "thread": "nvmf_tgt_poll_group_000", 00:19:06.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:06.542 "listen_address": { 00:19:06.542 "trtype": "RDMA", 00:19:06.542 "adrfam": "IPv4", 00:19:06.542 "traddr": "192.168.100.8", 00:19:06.543 "trsvcid": "4420" 00:19:06.543 }, 00:19:06.543 "peer_address": { 00:19:06.543 "trtype": "RDMA", 00:19:06.543 "adrfam": "IPv4", 00:19:06.543 "traddr": "192.168.100.8", 00:19:06.543 "trsvcid": "46140" 00:19:06.543 }, 00:19:06.543 "auth": { 00:19:06.543 "state": "completed", 00:19:06.543 "digest": "sha256", 00:19:06.543 "dhgroup": "null" 00:19:06.543 } 00:19:06.543 } 00:19:06.543 ]' 00:19:06.543 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.543 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.543 16:06:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.543 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:06.543 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.543 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.543 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.543 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:06.800 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:06.800 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:07.367 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.626 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:07.626 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.626 16:06:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.626 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.626 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.626 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.626 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.626 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.885 16:06:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.885 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.145 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.145 { 00:19:08.145 "cntlid": 3, 00:19:08.145 "qid": 0, 00:19:08.145 "state": "enabled", 00:19:08.145 "thread": "nvmf_tgt_poll_group_000", 00:19:08.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:08.145 "listen_address": { 00:19:08.145 "trtype": "RDMA", 00:19:08.145 "adrfam": "IPv4", 00:19:08.145 "traddr": "192.168.100.8", 00:19:08.145 "trsvcid": "4420" 00:19:08.145 }, 00:19:08.145 "peer_address": { 00:19:08.145 "trtype": "RDMA", 00:19:08.145 "adrfam": "IPv4", 00:19:08.145 "traddr": "192.168.100.8", 00:19:08.145 "trsvcid": "59494" 00:19:08.145 }, 00:19:08.145 "auth": { 00:19:08.145 "state": "completed", 00:19:08.145 "digest": "sha256", 00:19:08.145 "dhgroup": "null" 00:19:08.145 } 00:19:08.145 } 00:19:08.145 ]' 00:19:08.145 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.404 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.404 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.404 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:08.404 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.404 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.404 16:06:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.404 16:06:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.664 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:08.664 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:09.232 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.232 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:09.232 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.232 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.232 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.232 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.232 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.232 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.491 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:09.491 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.491 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.491 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:09.491 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:09.491 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.491 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.491 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.491 16:06:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.492 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.492 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.492 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.492 16:06:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.751 00:19:09.751 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.751 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.751 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.011 { 00:19:10.011 "cntlid": 5, 00:19:10.011 "qid": 0, 00:19:10.011 "state": "enabled", 00:19:10.011 "thread": "nvmf_tgt_poll_group_000", 00:19:10.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:10.011 "listen_address": { 00:19:10.011 "trtype": "RDMA", 00:19:10.011 "adrfam": "IPv4", 00:19:10.011 "traddr": "192.168.100.8", 00:19:10.011 "trsvcid": "4420" 00:19:10.011 }, 00:19:10.011 "peer_address": { 00:19:10.011 "trtype": "RDMA", 00:19:10.011 "adrfam": "IPv4", 00:19:10.011 "traddr": "192.168.100.8", 00:19:10.011 "trsvcid": "50189" 00:19:10.011 }, 00:19:10.011 "auth": { 00:19:10.011 "state": "completed", 00:19:10.011 "digest": "sha256", 00:19:10.011 "dhgroup": "null" 00:19:10.011 } 00:19:10.011 } 00:19:10.011 ]' 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:10.011 16:06:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.011 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.270 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:10.270 16:06:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:10.839 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.099 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.358 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.358 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.358 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.358 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.358 00:19:11.358 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.358 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.358 16:06:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.617 { 00:19:11.617 "cntlid": 7, 00:19:11.617 "qid": 0, 00:19:11.617 "state": "enabled", 00:19:11.617 "thread": "nvmf_tgt_poll_group_000", 00:19:11.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:11.617 "listen_address": { 00:19:11.617 "trtype": "RDMA", 00:19:11.617 "adrfam": "IPv4", 00:19:11.617 "traddr": "192.168.100.8", 00:19:11.617 "trsvcid": "4420" 00:19:11.617 }, 00:19:11.617 "peer_address": { 00:19:11.617 "trtype": "RDMA", 00:19:11.617 "adrfam": "IPv4", 00:19:11.617 "traddr": "192.168.100.8", 00:19:11.617 "trsvcid": "48040" 00:19:11.617 }, 00:19:11.617 "auth": { 00:19:11.617 "state": "completed", 00:19:11.617 "digest": "sha256", 00:19:11.617 "dhgroup": "null" 00:19:11.617 } 00:19:11.617 } 00:19:11.617 ]' 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.617 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
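The trace repeats one connect_authenticate cycle per digest/dhgroup/key combination. Reconstructed from the commands logged above, a single cycle looks roughly like the bash sketch below. The rpc.py path, socket path, NQNs, address, and flags are copied from the log; key2/ckey2 are key names the test registered earlier (outside this excerpt), and the assumption that the target-side rpc_cmd wrapper talks to SPDK's default socket is mine, not the log's.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock   # host-side SPDK app; target RPCs use the default socket
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# 1. Pin the host to one digest/dhgroup pair for this pass.
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# 2. Authorize the host NQN on the target with a DH-HMAC-CHAP key
#    (plus a controller key when bidirectional auth is being exercised).
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach from the host side; the controller only comes up if the
#    DH-HMAC-CHAP handshake succeeds.
$rpc -s $hostsock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2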
00:19:11.877 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:11.877 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.877 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.877 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.877 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.136 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:12.136 16:06:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.705 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.965 16:06:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.965 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.228 00:19:13.228 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.228 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.228 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.490 { 00:19:13.490 "cntlid": 9, 00:19:13.490 "qid": 0, 00:19:13.490 "state": "enabled", 00:19:13.490 "thread": "nvmf_tgt_poll_group_000", 00:19:13.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:13.490 "listen_address": { 00:19:13.490 "trtype": "RDMA", 00:19:13.490 "adrfam": "IPv4", 00:19:13.490 "traddr": "192.168.100.8", 00:19:13.490 "trsvcid": "4420" 00:19:13.490 }, 00:19:13.490 "peer_address": { 00:19:13.490 "trtype": "RDMA", 00:19:13.490 "adrfam": "IPv4", 00:19:13.490 "traddr": "192.168.100.8", 00:19:13.490 "trsvcid": "45113" 00:19:13.490 }, 00:19:13.490 "auth": { 00:19:13.490 "state": "completed", 00:19:13.490 "digest": "sha256", 00:19:13.490 "dhgroup": "ffdhe2048" 00:19:13.490 } 00:19:13.490 } 00:19:13.490 ]' 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
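Each cycle then verifies the authenticated connection from both ends before tearing it down. A minimal sketch of that check, reusing $rpc and $hostsock from the sketch above; the jq filters are the same ones the trace runs, while the variable capture is my construction:

# Host side: the attached controller should be visible by name.
$rpc -s $hostsock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

# Target side: fetch the subsystem's queue pairs and confirm the
# negotiated auth parameters match what this pass configured.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach so the next combination starts clean.
$rpc -s $hostsock bdev_nvme_detach_controller nvme0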
00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.490 16:06:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.749 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:13.749 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:14.318 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.578 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:14.578 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.578 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.578 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.578 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.578 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.578 16:06:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.578 16:06:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.578 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.837 00:19:14.837 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.837 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.837 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.097 { 00:19:15.097 "cntlid": 11, 00:19:15.097 "qid": 0, 00:19:15.097 "state": "enabled", 00:19:15.097 "thread": "nvmf_tgt_poll_group_000", 00:19:15.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:15.097 "listen_address": { 00:19:15.097 "trtype": "RDMA", 00:19:15.097 "adrfam": "IPv4", 00:19:15.097 "traddr": "192.168.100.8", 00:19:15.097 "trsvcid": "4420" 00:19:15.097 }, 00:19:15.097 "peer_address": { 00:19:15.097 "trtype": "RDMA", 00:19:15.097 "adrfam": "IPv4", 00:19:15.097 "traddr": 
"192.168.100.8", 00:19:15.097 "trsvcid": "34124" 00:19:15.097 }, 00:19:15.097 "auth": { 00:19:15.097 "state": "completed", 00:19:15.097 "digest": "sha256", 00:19:15.097 "dhgroup": "ffdhe2048" 00:19:15.097 } 00:19:15.097 } 00:19:15.097 ]' 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.097 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.355 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.355 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.355 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.355 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:15.355 16:06:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.293 16:06:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.553 00:19:16.553 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.553 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.553 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.816 { 00:19:16.816 "cntlid": 13, 00:19:16.816 "qid": 0, 00:19:16.816 "state": "enabled", 00:19:16.816 "thread": "nvmf_tgt_poll_group_000", 00:19:16.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:16.816 "listen_address": { 00:19:16.816 
"trtype": "RDMA", 00:19:16.816 "adrfam": "IPv4", 00:19:16.816 "traddr": "192.168.100.8", 00:19:16.816 "trsvcid": "4420" 00:19:16.816 }, 00:19:16.816 "peer_address": { 00:19:16.816 "trtype": "RDMA", 00:19:16.816 "adrfam": "IPv4", 00:19:16.816 "traddr": "192.168.100.8", 00:19:16.816 "trsvcid": "51015" 00:19:16.816 }, 00:19:16.816 "auth": { 00:19:16.816 "state": "completed", 00:19:16.816 "digest": "sha256", 00:19:16.816 "dhgroup": "ffdhe2048" 00:19:16.816 } 00:19:16.816 } 00:19:16.816 ]' 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.816 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.075 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.075 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.075 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.075 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.075 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.335 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:17.335 16:06:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:17.904 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.904 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:17.904 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.904 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.904 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.904 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.904 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.904 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.163 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:18.163 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.163 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.163 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.163 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:18.163 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.164 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:18.164 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.164 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.164 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.164 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:18.164 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.164 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.423 00:19:18.423 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.423 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.423 16:06:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.683 { 00:19:18.683 "cntlid": 15, 00:19:18.683 "qid": 0, 00:19:18.683 "state": "enabled", 
00:19:18.683 "thread": "nvmf_tgt_poll_group_000", 00:19:18.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:18.683 "listen_address": { 00:19:18.683 "trtype": "RDMA", 00:19:18.683 "adrfam": "IPv4", 00:19:18.683 "traddr": "192.168.100.8", 00:19:18.683 "trsvcid": "4420" 00:19:18.683 }, 00:19:18.683 "peer_address": { 00:19:18.683 "trtype": "RDMA", 00:19:18.683 "adrfam": "IPv4", 00:19:18.683 "traddr": "192.168.100.8", 00:19:18.683 "trsvcid": "41131" 00:19:18.683 }, 00:19:18.683 "auth": { 00:19:18.683 "state": "completed", 00:19:18.683 "digest": "sha256", 00:19:18.683 "dhgroup": "ffdhe2048" 00:19:18.683 } 00:19:18.683 } 00:19:18.683 ]' 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.683 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.942 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:18.942 16:06:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:19.511 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.770 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:19.770 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.770 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.770 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.770 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.770 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.770 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.770 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.029 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:20.029 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.030 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.030 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.289 16:06:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.289 { 00:19:20.289 "cntlid": 17, 00:19:20.289 "qid": 0, 00:19:20.289 "state": "enabled", 00:19:20.289 "thread": "nvmf_tgt_poll_group_000", 00:19:20.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:20.289 "listen_address": { 00:19:20.289 "trtype": "RDMA", 00:19:20.289 "adrfam": "IPv4", 00:19:20.289 "traddr": "192.168.100.8", 00:19:20.289 "trsvcid": "4420" 00:19:20.289 }, 00:19:20.289 "peer_address": { 00:19:20.289 "trtype": "RDMA", 00:19:20.289 "adrfam": "IPv4", 00:19:20.289 "traddr": "192.168.100.8", 00:19:20.289 "trsvcid": "45881" 00:19:20.289 }, 00:19:20.289 "auth": { 00:19:20.289 "state": "completed", 00:19:20.289 "digest": "sha256", 00:19:20.289 "dhgroup": "ffdhe3072" 00:19:20.289 } 00:19:20.289 } 00:19:20.289 ]' 00:19:20.289 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.549 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.549 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.549 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.549 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.549 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.549 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.549 16:06:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.808 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:20.808 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:21.377 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.377 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:21.377 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.377 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:21.377 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.377 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.377 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.377 16:06:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.636 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.895 00:19:21.895 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.895 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.895 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.155 16:06:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.155 { 00:19:22.155 "cntlid": 19, 00:19:22.155 "qid": 0, 00:19:22.155 "state": "enabled", 00:19:22.155 "thread": "nvmf_tgt_poll_group_000", 00:19:22.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:22.155 "listen_address": { 00:19:22.155 "trtype": "RDMA", 00:19:22.155 "adrfam": "IPv4", 00:19:22.155 "traddr": "192.168.100.8", 00:19:22.155 "trsvcid": "4420" 00:19:22.155 }, 00:19:22.155 "peer_address": { 00:19:22.155 "trtype": "RDMA", 00:19:22.155 "adrfam": "IPv4", 00:19:22.155 "traddr": "192.168.100.8", 00:19:22.155 "trsvcid": "33121" 00:19:22.155 }, 00:19:22.155 "auth": { 00:19:22.155 "state": "completed", 00:19:22.155 "digest": "sha256", 00:19:22.155 "dhgroup": "ffdhe3072" 00:19:22.155 } 00:19:22.155 } 00:19:22.155 ]' 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.155 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.414 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:22.414 16:06:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:22.983 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.242 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:23.242 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.242 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.242 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.242 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.242 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.242 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.570 16:06:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.830 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.830 { 00:19:23.830 "cntlid": 21, 00:19:23.830 "qid": 0, 00:19:23.830 "state": "enabled", 00:19:23.830 "thread": "nvmf_tgt_poll_group_000", 00:19:23.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:23.830 "listen_address": { 00:19:23.830 "trtype": "RDMA", 00:19:23.830 "adrfam": "IPv4", 00:19:23.830 "traddr": "192.168.100.8", 00:19:23.830 "trsvcid": "4420" 00:19:23.830 }, 00:19:23.830 "peer_address": { 00:19:23.830 "trtype": "RDMA", 00:19:23.830 "adrfam": "IPv4", 00:19:23.830 "traddr": "192.168.100.8", 00:19:23.830 "trsvcid": "40114" 00:19:23.830 }, 00:19:23.830 "auth": { 00:19:23.830 "state": "completed", 00:19:23.830 "digest": "sha256", 00:19:23.830 "dhgroup": "ffdhe3072" 00:19:23.830 } 00:19:23.830 } 00:19:23.830 ]' 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.830 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.089 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.089 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.089 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.089 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.089 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.348 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:24.348 16:06:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:24.916 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.916 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:24.916 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.916 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.916 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.916 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.916 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.916 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.176 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.435 00:19:25.436 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.436 16:06:53 
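Two RPC channels are interleaved throughout this trace: rpc_cmd drives the nvmf target over the default SPDK socket, while hostrpc (target/auth.sh@31) drives a second SPDK instance that plays the NVMe-oF host and listens on /var/tmp/host.sock. A minimal sketch of that helper, reconstructed from the expansions visible above:

# Sketch of the hostrpc helper at target/auth.sh@31 (rpc.py path taken from the log):
# forward every argument to the host-side SPDK instance on /var/tmp/host.sock.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostrpc() {
    "$rpc_py" -s /var/tmp/host.sock "$@"
}
# e.g. pin the host to a single digest/dhgroup before the next connect attempt:
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
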
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.436 16:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.695 { 00:19:25.695 "cntlid": 23, 00:19:25.695 "qid": 0, 00:19:25.695 "state": "enabled", 00:19:25.695 "thread": "nvmf_tgt_poll_group_000", 00:19:25.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:25.695 "listen_address": { 00:19:25.695 "trtype": "RDMA", 00:19:25.695 "adrfam": "IPv4", 00:19:25.695 "traddr": "192.168.100.8", 00:19:25.695 "trsvcid": "4420" 00:19:25.695 }, 00:19:25.695 "peer_address": { 00:19:25.695 "trtype": "RDMA", 00:19:25.695 "adrfam": "IPv4", 00:19:25.695 "traddr": "192.168.100.8", 00:19:25.695 "trsvcid": "37442" 00:19:25.695 }, 00:19:25.695 "auth": { 00:19:25.695 "state": "completed", 00:19:25.695 "digest": "sha256", 00:19:25.695 "dhgroup": "ffdhe3072" 00:19:25.695 } 00:19:25.695 } 00:19:25.695 ]' 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.695 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.954 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:25.954 16:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:26.522 16:06:55 
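Note that the nvme connect above carries only --dhchap-secret and no --dhchap-ctrl-secret: key slot 3 is exercised unidirectionally (the host authenticates to the controller but not vice versa), matching the nvmf_subsystem_add_host call that passed --dhchap-key key3 alone. The secret syntax is DHHC-1:<hh>:<base64>:, where, per the DH-HMAC-CHAP convention nvme-cli uses, <hh> names the hash that transformed the secret (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A hedged sketch of minting such a secret; the gen-dhchap-key flags are assumed from current nvme-cli and do not appear in this log:

# Generate a 32-byte secret transformed with SHA-256 (yields a DHHC-1:01:... string).
# --hmac selects the transform (0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512);
# -n binds the transform to the host NQN that will present the key.
nvme gen-dhchap-key --key-length=32 --hmac=1 \
    -n nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
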
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.781 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.040 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.298 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.298 { 00:19:27.298 "cntlid": 25, 00:19:27.298 "qid": 0, 00:19:27.298 "state": "enabled", 00:19:27.298 "thread": "nvmf_tgt_poll_group_000", 00:19:27.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:27.298 "listen_address": { 00:19:27.298 "trtype": "RDMA", 00:19:27.298 "adrfam": "IPv4", 00:19:27.298 "traddr": "192.168.100.8", 00:19:27.298 "trsvcid": "4420" 00:19:27.298 }, 00:19:27.298 "peer_address": { 00:19:27.298 "trtype": "RDMA", 00:19:27.298 "adrfam": "IPv4", 00:19:27.298 "traddr": "192.168.100.8", 00:19:27.298 "trsvcid": "57218" 00:19:27.298 }, 00:19:27.298 "auth": { 00:19:27.298 "state": "completed", 00:19:27.298 "digest": "sha256", 00:19:27.298 "dhgroup": "ffdhe4096" 00:19:27.299 } 00:19:27.299 } 00:19:27.299 ]' 00:19:27.299 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.299 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.299 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.558 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.558 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.558 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.558 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.558 16:06:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.817 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:27.817 16:06:56 
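The ffdhe4096 sweep has begun at this point, and each key slot repeats the same lifecycle: register the host NQN and key(s) on the target, prove the SPDK initiator path by attaching and detaching a bdev controller, then prove the kernel initiator path with nvme connect/disconnect before removing the host again. One round, condensed from the trace (NQNs and addresses as logged; $hostnqn, $hostid, $key0 and $ckey0 stand in for values the script carries, and the qpair verification between attach and detach is sketched separately below):

# One connect_authenticate round as replayed here (sha256 / ffdhe4096 / key0).
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0      # register host + key pair on the target
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0      # SPDK-initiator DH-HMAC-CHAP handshake
hostrpc bdev_nvme_detach_controller nvme0           # drop the SPDK-side session
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"   # kernel-initiator handshake
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
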
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:28.386 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.386 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:28.386 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.386 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.386 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.386 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.386 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.386 16:06:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.646 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.906 00:19:28.906 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.906 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.906 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.165 { 00:19:29.165 "cntlid": 27, 00:19:29.165 "qid": 0, 00:19:29.165 "state": "enabled", 00:19:29.165 "thread": "nvmf_tgt_poll_group_000", 00:19:29.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:29.165 "listen_address": { 00:19:29.165 "trtype": "RDMA", 00:19:29.165 "adrfam": "IPv4", 00:19:29.165 "traddr": "192.168.100.8", 00:19:29.165 "trsvcid": "4420" 00:19:29.165 }, 00:19:29.165 "peer_address": { 00:19:29.165 "trtype": "RDMA", 00:19:29.165 "adrfam": "IPv4", 00:19:29.165 "traddr": "192.168.100.8", 00:19:29.165 "trsvcid": "37235" 00:19:29.165 }, 00:19:29.165 "auth": { 00:19:29.165 "state": "completed", 00:19:29.165 "digest": "sha256", 00:19:29.165 "dhgroup": "ffdhe4096" 00:19:29.165 } 00:19:29.165 } 00:19:29.165 ]' 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.165 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.424 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:29.424 16:06:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:29.992 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.252 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:30.252 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.252 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.252 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.252 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.252 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.252 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.512 16:06:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.771 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.771 { 00:19:30.771 "cntlid": 29, 00:19:30.771 "qid": 0, 00:19:30.771 "state": "enabled", 00:19:30.771 "thread": "nvmf_tgt_poll_group_000", 00:19:30.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:30.771 "listen_address": { 00:19:30.771 "trtype": "RDMA", 00:19:30.771 "adrfam": "IPv4", 00:19:30.771 "traddr": "192.168.100.8", 00:19:30.771 "trsvcid": "4420" 00:19:30.771 }, 00:19:30.771 "peer_address": { 00:19:30.771 "trtype": "RDMA", 00:19:30.771 "adrfam": "IPv4", 00:19:30.771 "traddr": "192.168.100.8", 00:19:30.771 "trsvcid": "45122" 00:19:30.771 }, 00:19:30.771 "auth": { 00:19:30.771 "state": "completed", 00:19:30.771 "digest": "sha256", 00:19:30.771 "dhgroup": "ffdhe4096" 00:19:30.771 } 00:19:30.771 } 00:19:30.771 ]' 00:19:30.771 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.031 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.031 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.031 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.031 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.031 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.031 16:06:59 
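The jq/[[ block just above is the assertion half of connect_authenticate: rather than trusting that a misconfigured handshake would have failed the attach, the script reads the controller name back from the host instance and pulls the qpair's auth object from the target, requiring state "completed" with exactly the digest and dhgroup under test. The same checks, collapsed into a sketch with the values from this ffdhe4096 round:

# Assert the authenticated session on both ends; 'completed' means DH-HMAC-CHAP finished.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
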
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.031 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.290 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:31.290 16:06:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:31.857 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.857 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:31.857 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.857 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.857 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.857 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:31.857 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.115 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:32.115 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.115 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.115 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.116 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.116 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.116 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:32.116 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.116 16:07:00 
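The ckey assignment at target/auth.sh@68, expanded in every round, is what makes the key3 passes unidirectional: ${ckeys[$3]:+...} produces the --dhchap-ctrlr-key argument pair only when a controller key exists for that slot, so a single loop body serves both mutual and one-way authentication. The idiom in isolation (array contents are illustrative, not the script's real keys):

# ${var:+word} expands to 'word' only when var is set and non-empty.
ckeys=("c0" "c1" "c2" "")            # slot 3 deliberately has no controller key
for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid: --dhchap-key key$keyid ${ckey[*]}"
done
# Slots 0-2 print the controller-key flag; slot 3 prints without it.
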
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.116 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.116 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.116 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.116 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.374 00:19:32.374 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.374 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.374 16:07:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.633 { 00:19:32.633 "cntlid": 31, 00:19:32.633 "qid": 0, 00:19:32.633 "state": "enabled", 00:19:32.633 "thread": "nvmf_tgt_poll_group_000", 00:19:32.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:32.633 "listen_address": { 00:19:32.633 "trtype": "RDMA", 00:19:32.633 "adrfam": "IPv4", 00:19:32.633 "traddr": "192.168.100.8", 00:19:32.633 "trsvcid": "4420" 00:19:32.633 }, 00:19:32.633 "peer_address": { 00:19:32.633 "trtype": "RDMA", 00:19:32.633 "adrfam": "IPv4", 00:19:32.633 "traddr": "192.168.100.8", 00:19:32.633 "trsvcid": "41015" 00:19:32.633 }, 00:19:32.633 "auth": { 00:19:32.633 "state": "completed", 00:19:32.633 "digest": "sha256", 00:19:32.633 "dhgroup": "ffdhe4096" 00:19:32.633 } 00:19:32.633 } 00:19:32.633 ]' 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:32.633 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:19:32.893 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.893 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.893 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.893 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:32.893 16:07:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:33.461 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.719 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:33.719 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.719 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.719 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.719 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.719 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.719 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.719 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.978 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.238 00:19:34.238 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.238 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.238 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.497 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.497 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.497 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.497 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.497 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.497 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.497 { 00:19:34.497 "cntlid": 33, 00:19:34.497 "qid": 0, 00:19:34.497 "state": "enabled", 00:19:34.497 "thread": "nvmf_tgt_poll_group_000", 00:19:34.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:34.498 "listen_address": { 00:19:34.498 "trtype": "RDMA", 00:19:34.498 "adrfam": "IPv4", 00:19:34.498 "traddr": "192.168.100.8", 00:19:34.498 "trsvcid": "4420" 00:19:34.498 }, 00:19:34.498 "peer_address": { 00:19:34.498 "trtype": "RDMA", 00:19:34.498 "adrfam": "IPv4", 00:19:34.498 "traddr": "192.168.100.8", 00:19:34.498 "trsvcid": "46631" 00:19:34.498 }, 00:19:34.498 "auth": { 00:19:34.498 "state": "completed", 00:19:34.498 "digest": "sha256", 00:19:34.498 "dhgroup": "ffdhe6144" 00:19:34.498 } 00:19:34.498 } 00:19:34.498 ]' 00:19:34.498 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.498 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.498 16:07:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:19:34.498 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.498 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.498 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.498 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.498 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.757 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:34.757 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:35.326 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.586 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:35.586 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.586 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.586 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.586 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.586 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.586 16:07:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.845 16:07:04 
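The @119/@120 markers recurring through this section are the outer sweep: for each DH group (ffdhe3072 earlier, ffdhe4096, now ffdhe6144) every key slot 0-3 is retried, with bdev_nvme_set_options reissued first so the host can negotiate only the one digest/dhgroup combination under test. Roughly, with the digest fixed at sha256 for this stretch; the dhgroups list below is an assumption (only the 3072/4096/6144 legs appear in this log) and keys[] is populated earlier in the script:

# Driving loops behind target/auth.sh@119-123, sketched from the trace.
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)     # assumed subset; only these appear here
for dhgroup in "${dhgroups[@]}"; do          # @119
    for keyid in "${!keys[@]}"; do           # @120
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @121
        connect_authenticate sha256 "$dhgroup" "$keyid"            # @123
    done
done
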
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.845 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.105 00:19:36.105 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.105 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.105 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.364 { 00:19:36.364 "cntlid": 35, 00:19:36.364 "qid": 0, 00:19:36.364 "state": "enabled", 00:19:36.364 "thread": "nvmf_tgt_poll_group_000", 00:19:36.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:36.364 "listen_address": { 00:19:36.364 "trtype": "RDMA", 00:19:36.364 "adrfam": "IPv4", 00:19:36.364 "traddr": "192.168.100.8", 00:19:36.364 "trsvcid": "4420" 00:19:36.364 }, 00:19:36.364 "peer_address": { 00:19:36.364 "trtype": "RDMA", 00:19:36.364 "adrfam": "IPv4", 00:19:36.364 "traddr": "192.168.100.8", 00:19:36.364 "trsvcid": "58146" 00:19:36.364 }, 00:19:36.364 "auth": { 00:19:36.364 "state": "completed", 00:19:36.364 "digest": "sha256", 00:19:36.364 "dhgroup": "ffdhe6144" 00:19:36.364 } 00:19:36.364 } 
00:19:36.364 ]' 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.364 16:07:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.624 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:36.624 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:37.190 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.449 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:37.449 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.449 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.450 16:07:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.033 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.033 { 00:19:38.033 "cntlid": 37, 00:19:38.033 "qid": 0, 00:19:38.033 "state": "enabled", 00:19:38.033 "thread": "nvmf_tgt_poll_group_000", 00:19:38.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:38.033 "listen_address": { 00:19:38.033 "trtype": "RDMA", 00:19:38.033 "adrfam": "IPv4", 00:19:38.033 "traddr": "192.168.100.8", 00:19:38.033 "trsvcid": "4420" 00:19:38.033 }, 00:19:38.033 "peer_address": { 00:19:38.033 "trtype": "RDMA", 00:19:38.033 "adrfam": 
"IPv4", 00:19:38.033 "traddr": "192.168.100.8", 00:19:38.033 "trsvcid": "59629" 00:19:38.033 }, 00:19:38.033 "auth": { 00:19:38.033 "state": "completed", 00:19:38.033 "digest": "sha256", 00:19:38.033 "dhgroup": "ffdhe6144" 00:19:38.033 } 00:19:38.033 } 00:19:38.033 ]' 00:19:38.033 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.292 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.292 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.292 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.292 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.292 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.292 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.292 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.551 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:38.551 16:07:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:39.118 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.118 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:39.118 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.118 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.118 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.118 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.118 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.118 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.377 16:07:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.636 00:19:39.636 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.636 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.636 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.895 { 00:19:39.895 "cntlid": 39, 00:19:39.895 "qid": 0, 00:19:39.895 "state": "enabled", 00:19:39.895 "thread": "nvmf_tgt_poll_group_000", 00:19:39.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:39.895 "listen_address": { 00:19:39.895 "trtype": "RDMA", 00:19:39.895 "adrfam": "IPv4", 00:19:39.895 
"traddr": "192.168.100.8", 00:19:39.895 "trsvcid": "4420" 00:19:39.895 }, 00:19:39.895 "peer_address": { 00:19:39.895 "trtype": "RDMA", 00:19:39.895 "adrfam": "IPv4", 00:19:39.895 "traddr": "192.168.100.8", 00:19:39.895 "trsvcid": "35567" 00:19:39.895 }, 00:19:39.895 "auth": { 00:19:39.895 "state": "completed", 00:19:39.895 "digest": "sha256", 00:19:39.895 "dhgroup": "ffdhe6144" 00:19:39.895 } 00:19:39.895 } 00:19:39.895 ]' 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.895 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.154 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.154 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.154 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.154 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:40.154 16:07:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.092 16:07:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.660 00:19:41.660 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.660 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.660 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.919 { 00:19:41.919 "cntlid": 41, 00:19:41.919 "qid": 0, 00:19:41.919 "state": "enabled", 
00:19:41.919 "thread": "nvmf_tgt_poll_group_000", 00:19:41.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:41.919 "listen_address": { 00:19:41.919 "trtype": "RDMA", 00:19:41.919 "adrfam": "IPv4", 00:19:41.919 "traddr": "192.168.100.8", 00:19:41.919 "trsvcid": "4420" 00:19:41.919 }, 00:19:41.919 "peer_address": { 00:19:41.919 "trtype": "RDMA", 00:19:41.919 "adrfam": "IPv4", 00:19:41.919 "traddr": "192.168.100.8", 00:19:41.919 "trsvcid": "33420" 00:19:41.919 }, 00:19:41.919 "auth": { 00:19:41.919 "state": "completed", 00:19:41.919 "digest": "sha256", 00:19:41.919 "dhgroup": "ffdhe8192" 00:19:41.919 } 00:19:41.919 } 00:19:41.919 ]' 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.919 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.178 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:42.178 16:07:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:42.747 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.006 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:43.006 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.006 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.006 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.006 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.006 16:07:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.006 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.265 16:07:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.525 00:19:43.525 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.525 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.525 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.784 { 00:19:43.784 "cntlid": 43, 00:19:43.784 "qid": 0, 00:19:43.784 "state": "enabled", 00:19:43.784 "thread": "nvmf_tgt_poll_group_000", 00:19:43.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:43.784 "listen_address": { 00:19:43.784 "trtype": "RDMA", 00:19:43.784 "adrfam": "IPv4", 00:19:43.784 "traddr": "192.168.100.8", 00:19:43.784 "trsvcid": "4420" 00:19:43.784 }, 00:19:43.784 "peer_address": { 00:19:43.784 "trtype": "RDMA", 00:19:43.784 "adrfam": "IPv4", 00:19:43.784 "traddr": "192.168.100.8", 00:19:43.784 "trsvcid": "34133" 00:19:43.784 }, 00:19:43.784 "auth": { 00:19:43.784 "state": "completed", 00:19:43.784 "digest": "sha256", 00:19:43.784 "dhgroup": "ffdhe8192" 00:19:43.784 } 00:19:43.784 } 00:19:43.784 ]' 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.784 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.043 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.043 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.043 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.043 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.043 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.302 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:44.302 16:07:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:44.965 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.965 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:44.965 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.965 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
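(annotation, not part of the captured log) Every connect_authenticate pass in this trace repeats one fixed RPC sequence per digest/dhgroup/key-id combination. A condensed sketch of that sequence follows, using the socket paths, NQNs, and addresses from this run; rpc_tgt is a stand-in name for the script's rpc_cmd helper (target-side default socket), while hostrpc mirrors the helper of the same name (host app at /var/tmp/host.sock):

    #!/usr/bin/env bash
    # rpc_tgt: target-side RPC (default socket); hostrpc: host-app RPC socket.
    rpc_tgt() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"; }
    hostrpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    SUBNQN="nqn.2024-03.io.spdk:cnode0"
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e"
    # 1. Pin the host initiator to the digest/dhgroup pair under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # 2. Register the host on the subsystem with its DH-HMAC-CHAP key; the ctrlr
    #    key is passed only when a ckeyN exists (the ${ckeys[$3]:+...} expansion).
    rpc_tgt nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # 3. Attach a controller so the handshake actually runs over RDMA port 4420.
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # 4. Verify: the controller exists and the target reports a completed
    #    negotiation with the expected digest and dhgroup on qpair 0.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0
    rpc_tgt nvmf_subsystem_get_qpairs "$SUBNQN" \
        | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'
    # 5. Detach; the host registration itself is removed only after the
    #    kernel-initiator pass (see the nvme-cli sketch further below).
    hostrpc bdev_nvme_detach_controller nvme0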
00:19:44.965 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.965 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.965 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.965 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.224 16:07:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.484 00:19:45.484 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.484 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.484 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.743 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.743 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.743 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.743 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.743 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.743 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.743 { 00:19:45.743 "cntlid": 45, 00:19:45.743 "qid": 0, 00:19:45.743 "state": "enabled", 00:19:45.743 "thread": "nvmf_tgt_poll_group_000", 00:19:45.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:45.743 "listen_address": { 00:19:45.743 "trtype": "RDMA", 00:19:45.743 "adrfam": "IPv4", 00:19:45.743 "traddr": "192.168.100.8", 00:19:45.743 "trsvcid": "4420" 00:19:45.743 }, 00:19:45.743 "peer_address": { 00:19:45.743 "trtype": "RDMA", 00:19:45.743 "adrfam": "IPv4", 00:19:45.743 "traddr": "192.168.100.8", 00:19:45.743 "trsvcid": "54594" 00:19:45.743 }, 00:19:45.743 "auth": { 00:19:45.743 "state": "completed", 00:19:45.743 "digest": "sha256", 00:19:45.743 "dhgroup": "ffdhe8192" 00:19:45.743 } 00:19:45.743 } 00:19:45.743 ]' 00:19:45.743 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.002 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.002 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.002 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.002 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.002 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.002 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.002 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.261 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:46.261 16:07:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:46.829 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.829 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:46.829 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.829 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.829 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.829 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.829 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.829 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.088 16:07:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.657 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.657 
16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.657 { 00:19:47.657 "cntlid": 47, 00:19:47.657 "qid": 0, 00:19:47.657 "state": "enabled", 00:19:47.657 "thread": "nvmf_tgt_poll_group_000", 00:19:47.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:47.657 "listen_address": { 00:19:47.657 "trtype": "RDMA", 00:19:47.657 "adrfam": "IPv4", 00:19:47.657 "traddr": "192.168.100.8", 00:19:47.657 "trsvcid": "4420" 00:19:47.657 }, 00:19:47.657 "peer_address": { 00:19:47.657 "trtype": "RDMA", 00:19:47.657 "adrfam": "IPv4", 00:19:47.657 "traddr": "192.168.100.8", 00:19:47.657 "trsvcid": "35543" 00:19:47.657 }, 00:19:47.657 "auth": { 00:19:47.657 "state": "completed", 00:19:47.657 "digest": "sha256", 00:19:47.657 "dhgroup": "ffdhe8192" 00:19:47.657 } 00:19:47.657 } 00:19:47.657 ]' 00:19:47.657 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.916 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.916 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.916 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.916 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.916 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.916 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.916 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.176 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:48.176 16:07:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.744 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.003 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.263 00:19:49.263 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:19:49.263 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.263 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.523 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.523 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.523 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.523 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.523 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.523 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.523 { 00:19:49.523 "cntlid": 49, 00:19:49.523 "qid": 0, 00:19:49.523 "state": "enabled", 00:19:49.523 "thread": "nvmf_tgt_poll_group_000", 00:19:49.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:49.523 "listen_address": { 00:19:49.523 "trtype": "RDMA", 00:19:49.523 "adrfam": "IPv4", 00:19:49.523 "traddr": "192.168.100.8", 00:19:49.523 "trsvcid": "4420" 00:19:49.523 }, 00:19:49.523 "peer_address": { 00:19:49.523 "trtype": "RDMA", 00:19:49.523 "adrfam": "IPv4", 00:19:49.523 "traddr": "192.168.100.8", 00:19:49.523 "trsvcid": "38117" 00:19:49.523 }, 00:19:49.523 "auth": { 00:19:49.523 "state": "completed", 00:19:49.523 "digest": "sha384", 00:19:49.523 "dhgroup": "null" 00:19:49.523 } 00:19:49.523 } 00:19:49.523 ]' 00:19:49.523 16:07:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.523 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.523 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.523 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.523 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.782 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.782 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.782 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.782 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:49.782 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:50.719 16:07:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.719 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.979 00:19:50.979 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.979 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.979 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.239 { 00:19:51.239 "cntlid": 51, 00:19:51.239 "qid": 0, 00:19:51.239 "state": "enabled", 00:19:51.239 "thread": "nvmf_tgt_poll_group_000", 00:19:51.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:51.239 "listen_address": { 00:19:51.239 "trtype": "RDMA", 00:19:51.239 "adrfam": "IPv4", 00:19:51.239 "traddr": "192.168.100.8", 00:19:51.239 "trsvcid": "4420" 00:19:51.239 }, 00:19:51.239 "peer_address": { 00:19:51.239 "trtype": "RDMA", 00:19:51.239 "adrfam": "IPv4", 00:19:51.239 "traddr": "192.168.100.8", 00:19:51.239 "trsvcid": "50429" 00:19:51.239 }, 00:19:51.239 "auth": { 00:19:51.239 "state": "completed", 00:19:51.239 "digest": "sha384", 00:19:51.239 "dhgroup": "null" 00:19:51.239 } 00:19:51.239 } 00:19:51.239 ]' 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.239 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.499 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.499 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.499 16:07:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.499 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:51.499 16:07:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:52.437 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:52.438 16:07:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.697 00:19:52.697 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.697 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.697 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.956 { 00:19:52.956 "cntlid": 53, 00:19:52.956 "qid": 0, 00:19:52.956 "state": "enabled", 00:19:52.956 "thread": "nvmf_tgt_poll_group_000", 00:19:52.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:52.956 "listen_address": { 00:19:52.956 "trtype": "RDMA", 00:19:52.956 "adrfam": "IPv4", 00:19:52.956 "traddr": "192.168.100.8", 00:19:52.956 "trsvcid": "4420" 00:19:52.956 }, 00:19:52.956 "peer_address": { 00:19:52.956 "trtype": "RDMA", 00:19:52.956 "adrfam": "IPv4", 00:19:52.956 "traddr": "192.168.100.8", 00:19:52.956 "trsvcid": "37191" 00:19:52.956 }, 00:19:52.956 "auth": { 00:19:52.956 "state": "completed", 00:19:52.956 "digest": "sha384", 00:19:52.956 "dhgroup": "null" 00:19:52.956 } 00:19:52.956 } 00:19:52.956 ]' 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.956 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.215 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:53.215 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.215 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.215 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.215 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.474 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:53.474 16:07:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:19:54.044 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.044 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:54.044 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.044 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.044 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.044 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.044 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:54.044 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.304 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.564 00:19:54.564 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.564 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.564 16:07:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.823 { 00:19:54.823 "cntlid": 55, 00:19:54.823 "qid": 0, 00:19:54.823 "state": "enabled", 00:19:54.823 "thread": "nvmf_tgt_poll_group_000", 00:19:54.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:54.823 "listen_address": { 00:19:54.823 "trtype": "RDMA", 00:19:54.823 "adrfam": "IPv4", 00:19:54.823 "traddr": "192.168.100.8", 00:19:54.823 "trsvcid": "4420" 00:19:54.823 }, 00:19:54.823 "peer_address": { 00:19:54.823 "trtype": "RDMA", 00:19:54.823 "adrfam": "IPv4", 00:19:54.823 "traddr": "192.168.100.8", 00:19:54.823 "trsvcid": "50524" 00:19:54.823 }, 00:19:54.823 "auth": { 00:19:54.823 "state": "completed", 00:19:54.823 "digest": "sha384", 00:19:54.823 "dhgroup": "null" 00:19:54.823 } 00:19:54.823 } 00:19:54.823 ]' 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.823 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:19:55.082 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:55.082 16:07:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:19:55.651 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.911 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.170 00:19:56.170 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.170 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.170 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.430 { 00:19:56.430 "cntlid": 57, 00:19:56.430 "qid": 0, 00:19:56.430 "state": "enabled", 00:19:56.430 "thread": "nvmf_tgt_poll_group_000", 00:19:56.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:56.430 "listen_address": { 00:19:56.430 "trtype": "RDMA", 00:19:56.430 "adrfam": "IPv4", 00:19:56.430 "traddr": "192.168.100.8", 00:19:56.430 "trsvcid": "4420" 00:19:56.430 }, 00:19:56.430 "peer_address": { 00:19:56.430 "trtype": "RDMA", 00:19:56.430 "adrfam": "IPv4", 00:19:56.430 "traddr": "192.168.100.8", 00:19:56.430 "trsvcid": "44438" 00:19:56.430 }, 00:19:56.430 "auth": { 00:19:56.430 "state": "completed", 00:19:56.430 "digest": "sha384", 00:19:56.430 "dhgroup": "ffdhe2048" 00:19:56.430 } 00:19:56.430 } 00:19:56.430 ]' 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.430 16:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.689 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.689 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.689 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.689 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:19:56.689 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.689 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:56.689 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:19:57.627 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.627 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:57.627 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.627 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.627 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.627 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.627 16:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.627 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.886 
16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.886 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.886 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.146 { 00:19:58.146 "cntlid": 59, 00:19:58.146 "qid": 0, 00:19:58.146 "state": "enabled", 00:19:58.146 "thread": "nvmf_tgt_poll_group_000", 00:19:58.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:58.146 "listen_address": { 00:19:58.146 "trtype": "RDMA", 00:19:58.146 "adrfam": "IPv4", 00:19:58.146 "traddr": "192.168.100.8", 00:19:58.146 "trsvcid": "4420" 00:19:58.146 }, 00:19:58.146 "peer_address": { 00:19:58.146 "trtype": "RDMA", 00:19:58.146 "adrfam": "IPv4", 00:19:58.146 "traddr": "192.168.100.8", 00:19:58.146 "trsvcid": "50534" 00:19:58.146 }, 00:19:58.146 "auth": { 00:19:58.146 "state": "completed", 00:19:58.146 "digest": "sha384", 00:19:58.146 "dhgroup": "ffdhe2048" 00:19:58.146 } 00:19:58.146 } 00:19:58.146 ]' 00:19:58.146 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.405 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.405 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.405 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
00:19:58.405 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.405 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.405 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.405 16:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.665 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:58.665 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:19:59.233 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.233 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:59.233 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.233 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.233 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.233 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.233 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.233 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.493 16:07:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.752 00:19:59.752 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.752 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.752 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.025 { 00:20:00.025 "cntlid": 61, 00:20:00.025 "qid": 0, 00:20:00.025 "state": "enabled", 00:20:00.025 "thread": "nvmf_tgt_poll_group_000", 00:20:00.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:00.025 "listen_address": { 00:20:00.025 "trtype": "RDMA", 00:20:00.025 "adrfam": "IPv4", 00:20:00.025 "traddr": "192.168.100.8", 00:20:00.025 "trsvcid": "4420" 00:20:00.025 }, 00:20:00.025 "peer_address": { 00:20:00.025 "trtype": "RDMA", 00:20:00.025 "adrfam": "IPv4", 00:20:00.025 "traddr": "192.168.100.8", 00:20:00.025 "trsvcid": "35746" 00:20:00.025 }, 00:20:00.025 "auth": { 00:20:00.025 "state": "completed", 00:20:00.025 "digest": "sha384", 00:20:00.025 "dhgroup": "ffdhe2048" 00:20:00.025 } 00:20:00.025 } 00:20:00.025 ]' 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.025 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.286 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:00.286 16:07:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:00.854 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.114 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:01.114 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.114 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.114 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.114 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.114 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.114 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.374 16:07:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.374 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.633 00:20:01.633 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.633 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.633 16:07:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.633 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.633 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.633 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.633 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.892 { 00:20:01.892 "cntlid": 63, 00:20:01.892 "qid": 0, 00:20:01.892 "state": "enabled", 00:20:01.892 "thread": "nvmf_tgt_poll_group_000", 00:20:01.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:01.892 "listen_address": { 00:20:01.892 "trtype": "RDMA", 00:20:01.892 "adrfam": "IPv4", 00:20:01.892 "traddr": "192.168.100.8", 00:20:01.892 "trsvcid": "4420" 00:20:01.892 }, 00:20:01.892 "peer_address": { 00:20:01.892 "trtype": "RDMA", 00:20:01.892 "adrfam": "IPv4", 00:20:01.892 "traddr": "192.168.100.8", 00:20:01.892 "trsvcid": "49011" 00:20:01.892 }, 00:20:01.892 "auth": { 00:20:01.892 "state": "completed", 00:20:01.892 "digest": "sha384", 00:20:01.892 "dhgroup": "ffdhe2048" 00:20:01.892 } 00:20:01.892 } 00:20:01.892 ]' 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.892 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.151 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:02.151 16:07:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.720 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.979 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.238 00:20:03.238 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.239 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.239 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.498 { 00:20:03.498 "cntlid": 65, 00:20:03.498 "qid": 0, 00:20:03.498 "state": "enabled", 00:20:03.498 "thread": "nvmf_tgt_poll_group_000", 00:20:03.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:03.498 "listen_address": { 00:20:03.498 "trtype": "RDMA", 00:20:03.498 "adrfam": "IPv4", 00:20:03.498 "traddr": "192.168.100.8", 00:20:03.498 "trsvcid": "4420" 00:20:03.498 }, 00:20:03.498 "peer_address": { 00:20:03.498 "trtype": "RDMA", 00:20:03.498 "adrfam": "IPv4", 00:20:03.498 "traddr": "192.168.100.8", 00:20:03.498 "trsvcid": "59259" 
00:20:03.498 }, 00:20:03.498 "auth": { 00:20:03.498 "state": "completed", 00:20:03.498 "digest": "sha384", 00:20:03.498 "dhgroup": "ffdhe3072" 00:20:03.498 } 00:20:03.498 } 00:20:03.498 ]' 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.498 16:07:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.498 16:07:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.498 16:07:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.757 16:07:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.757 16:07:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.757 16:07:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.757 16:07:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:03.757 16:07:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:04.325 16:07:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.584 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:04.584 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.584 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.584 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.584 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.584 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.584 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.844 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.103 00:20:05.103 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.103 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.103 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.362 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.362 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.362 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.362 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.362 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.362 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.362 { 00:20:05.362 "cntlid": 67, 00:20:05.363 "qid": 0, 00:20:05.363 "state": "enabled", 00:20:05.363 "thread": "nvmf_tgt_poll_group_000", 00:20:05.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 
00:20:05.363 "listen_address": { 00:20:05.363 "trtype": "RDMA", 00:20:05.363 "adrfam": "IPv4", 00:20:05.363 "traddr": "192.168.100.8", 00:20:05.363 "trsvcid": "4420" 00:20:05.363 }, 00:20:05.363 "peer_address": { 00:20:05.363 "trtype": "RDMA", 00:20:05.363 "adrfam": "IPv4", 00:20:05.363 "traddr": "192.168.100.8", 00:20:05.363 "trsvcid": "57127" 00:20:05.363 }, 00:20:05.363 "auth": { 00:20:05.363 "state": "completed", 00:20:05.363 "digest": "sha384", 00:20:05.363 "dhgroup": "ffdhe3072" 00:20:05.363 } 00:20:05.363 } 00:20:05.363 ]' 00:20:05.363 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.363 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.363 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.363 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.363 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.363 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.363 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.363 16:07:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.621 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:05.621 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:06.189 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.189 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:06.189 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.189 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.189 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.189 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.449 16:07:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.716 00:20:06.716 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.716 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.716 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.977 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.977 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.977 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.977 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:20:06.978 {
00:20:06.978 "cntlid": 69,
00:20:06.978 "qid": 0,
00:20:06.978 "state": "enabled",
00:20:06.978 "thread": "nvmf_tgt_poll_group_000",
00:20:06.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:06.978 "listen_address": {
00:20:06.978 "trtype": "RDMA",
00:20:06.978 "adrfam": "IPv4",
00:20:06.978 "traddr": "192.168.100.8",
00:20:06.978 "trsvcid": "4420"
00:20:06.978 },
00:20:06.978 "peer_address": {
00:20:06.978 "trtype": "RDMA",
00:20:06.978 "adrfam": "IPv4",
00:20:06.978 "traddr": "192.168.100.8",
00:20:06.978 "trsvcid": "43783"
00:20:06.978 },
00:20:06.978 "auth": {
00:20:06.978 "state": "completed",
00:20:06.978 "digest": "sha384",
00:20:06.978 "dhgroup": "ffdhe3072"
00:20:06.978 }
00:20:06.978 }
00:20:06.978 ]'
00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:06.978 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:07.237 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL:
00:20:07.237 16:07:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL:
00:20:07.805 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:08.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:08.065 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:08.065 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:08.065 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.065 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:08.065 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:08.065 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:20:08.065 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:08.324 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:08.584
00:20:08.584 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:08.584 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:08.584 16:07:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:08.584 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:08.584 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:08.584 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:08.584 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.843 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:08.843 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:08.843 {
00:20:08.843 "cntlid": 71,
00:20:08.843 "qid": 0,
00:20:08.843 "state": "enabled",
00:20:08.843 "thread": "nvmf_tgt_poll_group_000",
00:20:08.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:08.843 "listen_address": {
00:20:08.843 "trtype": "RDMA",
00:20:08.843 "adrfam": "IPv4",
00:20:08.843 "traddr": "192.168.100.8",
00:20:08.843 "trsvcid": "4420"
00:20:08.843 },
00:20:08.844 "peer_address": {
00:20:08.844 "trtype": "RDMA",
00:20:08.844 "adrfam": "IPv4",
00:20:08.844 "traddr": "192.168.100.8",
00:20:08.844 "trsvcid": "53083"
00:20:08.844 },
00:20:08.844 "auth": {
00:20:08.844 "state": "completed",
00:20:08.844 "digest": "sha384",
00:20:08.844 "dhgroup": "ffdhe3072"
00:20:08.844 }
00:20:08.844 }
00:20:08.844 ]'
00:20:08.844 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:08.844 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:08.844 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:08.844 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:08.844 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:08.844 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:08.844 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:08.844 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:09.148 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=:
00:20:09.148 16:07:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=:
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:09.776 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.036 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.295
00:20:10.295 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:10.295 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:10.295 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:10.555 {
00:20:10.555 "cntlid": 73,
00:20:10.555 "qid": 0,
00:20:10.555 "state": "enabled",
00:20:10.555 "thread": "nvmf_tgt_poll_group_000",
00:20:10.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:10.555 "listen_address": {
00:20:10.555 "trtype": "RDMA",
00:20:10.555 "adrfam": "IPv4",
00:20:10.555 "traddr": "192.168.100.8",
00:20:10.555 "trsvcid": "4420"
00:20:10.555 },
00:20:10.555 "peer_address": {
00:20:10.555 "trtype": "RDMA",
00:20:10.555 "adrfam": "IPv4",
00:20:10.555 "traddr": "192.168.100.8",
00:20:10.555 "trsvcid": "52355"
00:20:10.555 },
00:20:10.555 "auth": {
00:20:10.555 "state": "completed",
00:20:10.555 "digest": "sha384",
00:20:10.555 "dhgroup": "ffdhe4096"
00:20:10.555 }
00:20:10.555 }
00:20:10.555 ]'
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:10.555 16:07:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:10.555 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.555 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.555 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.814 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=:
00:20:10.814 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=:
00:20:11.382 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.382 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:11.382 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.382 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.382 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.382 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:11.382 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:11.382 16:07:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:11.641 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.642 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.901
00:20:11.901 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:11.901 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:11.901 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:12.160 {
00:20:12.160 "cntlid": 75,
00:20:12.160 "qid": 0,
00:20:12.160 "state": "enabled",
00:20:12.160 "thread": "nvmf_tgt_poll_group_000",
00:20:12.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:12.160 "listen_address": {
00:20:12.160 "trtype": "RDMA",
00:20:12.160 "adrfam": "IPv4",
00:20:12.160 "traddr": "192.168.100.8",
00:20:12.160 "trsvcid": "4420"
00:20:12.160 },
00:20:12.160 "peer_address": {
00:20:12.160 "trtype": "RDMA",
00:20:12.160 "adrfam": "IPv4",
00:20:12.160 "traddr": "192.168.100.8",
00:20:12.160 "trsvcid": "48686"
00:20:12.160 },
00:20:12.160 "auth": {
00:20:12.160 "state": "completed",
00:20:12.160 "digest": "sha384",
00:20:12.160 "dhgroup": "ffdhe4096"
00:20:12.160 }
00:20:12.160 }
00:20:12.160 ]'
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:12.160 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:12.420 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:12.420 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:12.420 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:12.420 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==:
00:20:12.420 16:07:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==:
00:20:12.988 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:13.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:13.247 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:13.247 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.247 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.247 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.247 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:13.247 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:13.247 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.506 16:07:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.766
00:20:13.766 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:13.766 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:13.766 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.766 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.766 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:13.766 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.766 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:14.026 {
00:20:14.026 "cntlid": 77,
00:20:14.026 "qid": 0,
00:20:14.026 "state": "enabled",
00:20:14.026 "thread": "nvmf_tgt_poll_group_000",
00:20:14.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:14.026 "listen_address": {
00:20:14.026 "trtype": "RDMA",
00:20:14.026 "adrfam": "IPv4",
00:20:14.026 "traddr": "192.168.100.8",
00:20:14.026 "trsvcid": "4420"
00:20:14.026 },
00:20:14.026 "peer_address": {
00:20:14.026 "trtype": "RDMA",
00:20:14.026 "adrfam": "IPv4",
00:20:14.026 "traddr": "192.168.100.8",
00:20:14.026 "trsvcid": "45952"
00:20:14.026 },
00:20:14.026 "auth": {
00:20:14.026 "state": "completed",
00:20:14.026 "digest": "sha384",
00:20:14.026 "dhgroup": "ffdhe4096"
00:20:14.026 }
00:20:14.026 }
00:20:14.026 ]'
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:14.026 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:14.285 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL:
00:20:14.285 16:07:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL:
00:20:14.853 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:14.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:14.853 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:14.853 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.854 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.854 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.854 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:14.854 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:14.854 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:15.113 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:15.372
00:20:15.372 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:15.372 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.372 16:07:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:15.632 {
00:20:15.632 "cntlid": 79,
00:20:15.632 "qid": 0,
00:20:15.632 "state": "enabled",
00:20:15.632 "thread": "nvmf_tgt_poll_group_000",
00:20:15.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:15.632 "listen_address": {
00:20:15.632 "trtype": "RDMA",
00:20:15.632 "adrfam": "IPv4",
00:20:15.632 "traddr": "192.168.100.8",
00:20:15.632 "trsvcid": "4420"
00:20:15.632 },
00:20:15.632 "peer_address": {
00:20:15.632 "trtype": "RDMA",
00:20:15.632 "adrfam": "IPv4",
00:20:15.632 "traddr": "192.168.100.8",
00:20:15.632 "trsvcid": "60652"
00:20:15.632 },
00:20:15.632 "auth": {
00:20:15.632 "state": "completed",
00:20:15.632 "digest": "sha384",
00:20:15.632 "dhgroup": "ffdhe4096"
00:20:15.632 }
00:20:15.632 }
00:20:15.632 ]'
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:15.632 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:15.892 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.892 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.892 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.892 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=:
00:20:15.892 16:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=:
00:20:16.461 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.719 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:16.719 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:16.719 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.719 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:16.719 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:16.719 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:16.719 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:16.719 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:16.977 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:16.978 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:16.978 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:17.236
00:20:17.236 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:17.236 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:17.236 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:17.496 {
00:20:17.496 "cntlid": 81,
00:20:17.496 "qid": 0,
00:20:17.496 "state": "enabled",
00:20:17.496 "thread": "nvmf_tgt_poll_group_000",
00:20:17.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:17.496 "listen_address": {
00:20:17.496 "trtype": "RDMA",
00:20:17.496 "adrfam": "IPv4",
00:20:17.496 "traddr": "192.168.100.8",
00:20:17.496 "trsvcid": "4420"
00:20:17.496 },
00:20:17.496 "peer_address": {
00:20:17.496 "trtype": "RDMA",
00:20:17.496 "adrfam": "IPv4",
00:20:17.496 "traddr": "192.168.100.8",
00:20:17.496 "trsvcid": "40860"
00:20:17.496 },
00:20:17.496 "auth": {
00:20:17.496 "state": "completed",
00:20:17.496 "digest": "sha384",
00:20:17.496 "dhgroup": "ffdhe6144"
00:20:17.496 }
00:20:17.496 }
00:20:17.496 ]'
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:17.496 16:07:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:17.496 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.496 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.496 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:17.755 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=:
00:20:17.755 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=:
00:20:18.324 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.583 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:18.583 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.583 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.583 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.583 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:18.584 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:18.584 16:07:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:18.584 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:20:18.584 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:18.584 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:18.584 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:18.584 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:18.584 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:18.584 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:18.843 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.843 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.843 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.843 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:18.843 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:18.843 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:19.102
00:20:19.102 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:19.102 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:19.102 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:19.362 {
00:20:19.362 "cntlid": 83,
00:20:19.362 "qid": 0,
00:20:19.362 "state": "enabled",
00:20:19.362 "thread": "nvmf_tgt_poll_group_000",
00:20:19.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:19.362 "listen_address": {
00:20:19.362 "trtype": "RDMA",
00:20:19.362 "adrfam": "IPv4",
00:20:19.362 "traddr": "192.168.100.8",
00:20:19.362 "trsvcid": "4420"
00:20:19.362 },
00:20:19.362 "peer_address": {
00:20:19.362 "trtype": "RDMA",
00:20:19.362 "adrfam": "IPv4",
00:20:19.362 "traddr": "192.168.100.8",
00:20:19.362 "trsvcid": "58789"
00:20:19.362 },
00:20:19.362 "auth": {
00:20:19.362 "state": "completed",
00:20:19.362 "digest": "sha384",
00:20:19.362 "dhgroup": "ffdhe6144"
00:20:19.362 }
00:20:19.362 }
00:20:19.362 ]'
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:19.362 16:07:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:19.622 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==:
00:20:19.622 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==:
00:20:20.190 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:20.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:20.190 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:20.190 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.190 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:20.450 16:07:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:20.709
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.969 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:20.969 {
00:20:20.969 "cntlid": 85,
00:20:20.969 "qid": 0,
00:20:20.969 "state": "enabled",
00:20:20.969 "thread": "nvmf_tgt_poll_group_000",
00:20:20.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:20.969 "listen_address": {
00:20:20.969 "trtype": "RDMA",
00:20:20.969 "adrfam": "IPv4",
00:20:20.969 "traddr": "192.168.100.8",
00:20:20.969 "trsvcid": "4420"
00:20:20.969 },
00:20:20.969 "peer_address": {
00:20:20.969 "trtype": "RDMA",
00:20:20.969 "adrfam": "IPv4",
00:20:20.969 "traddr": "192.168.100.8",
00:20:20.969 "trsvcid": "59353"
00:20:20.969 },
00:20:20.969 "auth": {
00:20:20.969 "state": "completed",
00:20:20.969 "digest": "sha384",
00:20:20.969 "dhgroup": "ffdhe6144"
00:20:20.969 }
00:20:20.969 }
00:20:20.969 ]'
00:20:21.228 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:21.228 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:21.228 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:21.228 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:21.228 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:21.228 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:21.228 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:21.228 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:21.486 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL:
00:20:21.486 16:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL:
00:20:22.055 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:22.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:22.055 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:22.055 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:22.055 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.055 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:22.055 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:22.055 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:22.055 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:22.315 16:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:22.575
00:20:22.576 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:22.576 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:22.576 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:22.836 {
00:20:22.836 "cntlid": 87,
00:20:22.836 "qid": 0,
00:20:22.836 "state": "enabled",
00:20:22.836 "thread": "nvmf_tgt_poll_group_000",
00:20:22.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:22.836 "listen_address": {
00:20:22.836 "trtype": "RDMA",
00:20:22.836 "adrfam": "IPv4",
00:20:22.836 "traddr": "192.168.100.8",
00:20:22.836 "trsvcid": "4420"
00:20:22.836 },
00:20:22.836 "peer_address": {
00:20:22.836 "trtype": "RDMA",
00:20:22.836 "adrfam": "IPv4",
00:20:22.836 "traddr": "192.168.100.8",
00:20:22.836 "trsvcid": "43667"
00:20:22.836 },
00:20:22.836 "auth": {
00:20:22.836 "state": "completed",
00:20:22.836 "digest": "sha384",
00:20:22.836 "dhgroup": "ffdhe6144"
00:20:22.836 }
00:20:22.836 }
00:20:22.836 ]'
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:22.836 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:23.095 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:23.095 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:23.095 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:23.096 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:23.096 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:23.096 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=:
00:20:23.096 16:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=:
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:24.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:24.033 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:24.034 16:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:24.603
00:20:24.603 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:24.603 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:24.603 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:24.862 {
00:20:24.862 "cntlid": 89,
00:20:24.862 "qid": 0,
00:20:24.862 "state": "enabled",
00:20:24.862 "thread": "nvmf_tgt_poll_group_000",
00:20:24.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:20:24.862 "listen_address": {
00:20:24.862 "trtype": "RDMA",
00:20:24.862 "adrfam": "IPv4",
00:20:24.862 "traddr": "192.168.100.8",
00:20:24.862 "trsvcid": "4420"
00:20:24.862 },
00:20:24.862 "peer_address": {
00:20:24.862 "trtype": "RDMA",
00:20:24.862 "adrfam": "IPv4",
00:20:24.862 "traddr": "192.168.100.8",
00:20:24.862 "trsvcid": "38365"
00:20:24.862 },
00:20:24.862 "auth": {
00:20:24.862 "state": "completed",
00:20:24.862 "digest": "sha384",
00:20:24.862 "dhgroup": "ffdhe8192"
00:20:24.862 }
00:20:24.862 }
00:20:24.862 ]'
00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.862 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.121 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:25.121 16:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:25.690 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
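
The trace above is one full pass of the test's connect/authenticate cycle, and the sha384/ffdhe8192 pass continuing below selects key1. Condensed, each pass pins the host to a single digest/dhgroup pair, registers the host NQN on the subsystem with a DH-HMAC-CHAP key, attaches a controller (which runs the handshake), and then inspects the result. A minimal sketch of that cycle, assuming rpc_cmd is the harness's wrapper for the target-side rpc.py (the log does not show that socket) and abbreviating the absolute /var/jenkins path used in the trace:

# One connect/authenticate pass, condensed from the xtrace above.
# hostrpc talks to the SPDK host app on /var/tmp/host.sock, as in the log.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
subnqn=nqn.2024-03.io.spdk:cnode0
hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }

# Pin the host to one digest/dhgroup combination for this pass.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Allow the host on the subsystem with a key (controller key is optional).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attaching the controller performs the DH-HMAC-CHAP handshake.
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

The key names (key1/ckey1) refer to keyring entries the script registered earlier; the qpair verification that follows each attach is sketched further below.
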
00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.949 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.950 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.950 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.518 00:20:26.518 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.518 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.518 16:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.776 { 00:20:26.776 "cntlid": 91, 00:20:26.776 "qid": 0, 00:20:26.776 "state": "enabled", 00:20:26.776 "thread": "nvmf_tgt_poll_group_000", 00:20:26.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:26.776 "listen_address": { 00:20:26.776 "trtype": "RDMA", 00:20:26.776 "adrfam": "IPv4", 00:20:26.776 "traddr": "192.168.100.8", 00:20:26.776 "trsvcid": "4420" 00:20:26.776 }, 00:20:26.776 "peer_address": { 00:20:26.776 "trtype": "RDMA", 00:20:26.776 "adrfam": "IPv4", 00:20:26.776 "traddr": "192.168.100.8", 00:20:26.776 "trsvcid": "53930" 00:20:26.776 }, 00:20:26.776 "auth": { 
00:20:26.776 "state": "completed", 00:20:26.776 "digest": "sha384", 00:20:26.776 "dhgroup": "ffdhe8192" 00:20:26.776 } 00:20:26.776 } 00:20:26.776 ]' 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.776 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.035 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:27.035 16:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:27.603 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.862 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:27.863 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.863 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.863 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.863 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.863 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.863 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.863 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:27.863 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.122 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.381 00:20:28.381 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.381 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.381 16:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.640 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.640 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.640 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.640 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.640 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.640 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.640 { 00:20:28.640 "cntlid": 93, 00:20:28.640 "qid": 0, 00:20:28.640 "state": "enabled", 00:20:28.640 "thread": "nvmf_tgt_poll_group_000", 00:20:28.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:28.640 "listen_address": { 00:20:28.640 "trtype": "RDMA", 00:20:28.640 "adrfam": "IPv4", 00:20:28.640 "traddr": "192.168.100.8", 
00:20:28.640 "trsvcid": "4420" 00:20:28.640 }, 00:20:28.640 "peer_address": { 00:20:28.641 "trtype": "RDMA", 00:20:28.641 "adrfam": "IPv4", 00:20:28.641 "traddr": "192.168.100.8", 00:20:28.641 "trsvcid": "52320" 00:20:28.641 }, 00:20:28.641 "auth": { 00:20:28.641 "state": "completed", 00:20:28.641 "digest": "sha384", 00:20:28.641 "dhgroup": "ffdhe8192" 00:20:28.641 } 00:20:28.641 } 00:20:28.641 ]' 00:20:28.641 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.641 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.641 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.641 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.641 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.900 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.900 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.900 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.900 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:28.900 16:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:29.468 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.727 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:29.727 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.727 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.727 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.727 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.727 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.727 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.987 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.246 00:20:30.246 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.246 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.246 16:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.506 { 00:20:30.506 "cntlid": 95, 00:20:30.506 "qid": 0, 00:20:30.506 "state": "enabled", 00:20:30.506 "thread": "nvmf_tgt_poll_group_000", 00:20:30.506 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:30.506 "listen_address": { 00:20:30.506 "trtype": "RDMA", 00:20:30.506 "adrfam": "IPv4", 00:20:30.506 "traddr": "192.168.100.8", 00:20:30.506 "trsvcid": "4420" 00:20:30.506 }, 00:20:30.506 "peer_address": { 00:20:30.506 "trtype": "RDMA", 00:20:30.506 "adrfam": "IPv4", 00:20:30.506 "traddr": "192.168.100.8", 00:20:30.506 "trsvcid": "58806" 00:20:30.506 }, 00:20:30.506 "auth": { 00:20:30.506 "state": "completed", 00:20:30.506 "digest": "sha384", 00:20:30.506 "dhgroup": "ffdhe8192" 00:20:30.506 } 00:20:30.506 } 00:20:30.506 ]' 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.506 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.766 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.766 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.766 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.766 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.766 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.025 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:31.025 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:31.594 16:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.595 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.854 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.113 00:20:32.113 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.113 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.113 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.373 16:08:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.373 { 00:20:32.373 "cntlid": 97, 00:20:32.373 "qid": 0, 00:20:32.373 "state": "enabled", 00:20:32.373 "thread": "nvmf_tgt_poll_group_000", 00:20:32.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:32.373 "listen_address": { 00:20:32.373 "trtype": "RDMA", 00:20:32.373 "adrfam": "IPv4", 00:20:32.373 "traddr": "192.168.100.8", 00:20:32.373 "trsvcid": "4420" 00:20:32.373 }, 00:20:32.373 "peer_address": { 00:20:32.373 "trtype": "RDMA", 00:20:32.373 "adrfam": "IPv4", 00:20:32.373 "traddr": "192.168.100.8", 00:20:32.373 "trsvcid": "57881" 00:20:32.373 }, 00:20:32.373 "auth": { 00:20:32.373 "state": "completed", 00:20:32.373 "digest": "sha512", 00:20:32.373 "dhgroup": "null" 00:20:32.373 } 00:20:32.373 } 00:20:32.373 ]' 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.373 16:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.700 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:32.700 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:33.297 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.297 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:33.297 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.297 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
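
Every pass is judged by the same jq probes against the controller list and the target's qpair listing, repeated throughout the trace. A short sketch of that verification, reusing hostrpc and $subnqn from the sketch above, with the expected values for the sha512/null passes running in this part of the log:

# Confirm the host-side controller came up under the expected name.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Inspect the target's qpair: the auth object records what was negotiated.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach so the next digest/dhgroup/key combination starts clean.
hostrpc bdev_nvme_detach_controller nvme0
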
00:20:33.297 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.297 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.297 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.297 16:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.557 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.817 00:20:33.817 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.817 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.817 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.076 { 00:20:34.076 "cntlid": 99, 00:20:34.076 "qid": 0, 00:20:34.076 "state": "enabled", 00:20:34.076 "thread": "nvmf_tgt_poll_group_000", 00:20:34.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:34.076 "listen_address": { 00:20:34.076 "trtype": "RDMA", 00:20:34.076 "adrfam": "IPv4", 00:20:34.076 "traddr": "192.168.100.8", 00:20:34.076 "trsvcid": "4420" 00:20:34.076 }, 00:20:34.076 "peer_address": { 00:20:34.076 "trtype": "RDMA", 00:20:34.076 "adrfam": "IPv4", 00:20:34.076 "traddr": "192.168.100.8", 00:20:34.076 "trsvcid": "42970" 00:20:34.076 }, 00:20:34.076 "auth": { 00:20:34.076 "state": "completed", 00:20:34.076 "digest": "sha512", 00:20:34.076 "dhgroup": "null" 00:20:34.076 } 00:20:34.076 } 00:20:34.076 ]' 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.076 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.335 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:34.335 16:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:34.904 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:35.163 
16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.163 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.423 00:20:35.423 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.423 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.423 16:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.682 
16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.683 { 00:20:35.683 "cntlid": 101, 00:20:35.683 "qid": 0, 00:20:35.683 "state": "enabled", 00:20:35.683 "thread": "nvmf_tgt_poll_group_000", 00:20:35.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:35.683 "listen_address": { 00:20:35.683 "trtype": "RDMA", 00:20:35.683 "adrfam": "IPv4", 00:20:35.683 "traddr": "192.168.100.8", 00:20:35.683 "trsvcid": "4420" 00:20:35.683 }, 00:20:35.683 "peer_address": { 00:20:35.683 "trtype": "RDMA", 00:20:35.683 "adrfam": "IPv4", 00:20:35.683 "traddr": "192.168.100.8", 00:20:35.683 "trsvcid": "57707" 00:20:35.683 }, 00:20:35.683 "auth": { 00:20:35.683 "state": "completed", 00:20:35.683 "digest": "sha512", 00:20:35.683 "dhgroup": "null" 00:20:35.683 } 00:20:35.683 } 00:20:35.683 ]' 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:35.683 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.942 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.942 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.942 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.942 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:35.942 16:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.880 16:08:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.880 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.139 00:20:37.139 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.139 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.139 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.397 { 00:20:37.397 "cntlid": 103, 00:20:37.397 "qid": 0, 00:20:37.397 "state": "enabled", 00:20:37.397 "thread": "nvmf_tgt_poll_group_000", 00:20:37.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:37.397 "listen_address": { 00:20:37.397 "trtype": "RDMA", 00:20:37.397 "adrfam": "IPv4", 00:20:37.397 "traddr": "192.168.100.8", 00:20:37.397 "trsvcid": "4420" 00:20:37.397 }, 00:20:37.397 "peer_address": { 00:20:37.397 "trtype": "RDMA", 00:20:37.397 "adrfam": "IPv4", 00:20:37.397 "traddr": "192.168.100.8", 00:20:37.397 "trsvcid": "47460" 00:20:37.397 }, 00:20:37.397 "auth": { 00:20:37.397 "state": "completed", 00:20:37.397 "digest": "sha512", 00:20:37.397 "dhgroup": "null" 00:20:37.397 } 00:20:37.397 } 00:20:37.397 ]' 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:37.397 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.657 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.657 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.657 16:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.657 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:37.657 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:38.225 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.484 16:08:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:38.484 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.484 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.484 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.484 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.484 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.484 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:38.484 16:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.744 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.004 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.004 { 00:20:39.004 "cntlid": 105, 00:20:39.004 "qid": 0, 00:20:39.004 "state": "enabled", 00:20:39.004 "thread": "nvmf_tgt_poll_group_000", 00:20:39.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:39.004 "listen_address": { 00:20:39.004 "trtype": "RDMA", 00:20:39.004 "adrfam": "IPv4", 00:20:39.004 "traddr": "192.168.100.8", 00:20:39.004 "trsvcid": "4420" 00:20:39.004 }, 00:20:39.004 "peer_address": { 00:20:39.004 "trtype": "RDMA", 00:20:39.004 "adrfam": "IPv4", 00:20:39.004 "traddr": "192.168.100.8", 00:20:39.004 "trsvcid": "56129" 00:20:39.004 }, 00:20:39.004 "auth": { 00:20:39.004 "state": "completed", 00:20:39.004 "digest": "sha512", 00:20:39.004 "dhgroup": "ffdhe2048" 00:20:39.004 } 00:20:39.004 } 00:20:39.004 ]' 00:20:39.004 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.262 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.263 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.263 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.263 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.263 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.263 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.263 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.520 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:39.520 16:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:40.088 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.088 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:40.088 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.088 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.088 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.088 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.088 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.088 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.347 16:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.607 00:20:40.607 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.607 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.607 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.866 { 00:20:40.866 "cntlid": 107, 00:20:40.866 "qid": 0, 00:20:40.866 "state": "enabled", 00:20:40.866 "thread": "nvmf_tgt_poll_group_000", 00:20:40.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:40.866 "listen_address": { 00:20:40.866 "trtype": "RDMA", 00:20:40.866 "adrfam": "IPv4", 00:20:40.866 "traddr": "192.168.100.8", 00:20:40.866 "trsvcid": "4420" 00:20:40.866 }, 00:20:40.866 "peer_address": { 00:20:40.866 "trtype": "RDMA", 00:20:40.866 "adrfam": "IPv4", 00:20:40.866 "traddr": "192.168.100.8", 00:20:40.866 "trsvcid": "49424" 00:20:40.866 }, 00:20:40.866 "auth": { 00:20:40.866 "state": "completed", 00:20:40.866 "digest": "sha512", 00:20:40.866 "dhgroup": "ffdhe2048" 00:20:40.866 } 00:20:40.866 } 00:20:40.866 ]' 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.866 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.125 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 
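Each connect_authenticate pass in this trace reduces to the same sequence of commands. A minimal sketch, assembled only from invocations visible in the log (socket path, address, and NQNs copied verbatim; key1/ckey1 stand in for whichever key index the iteration uses, and the target-side rpc_cmd is assumed to hit rpc.py's default socket):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # Target side: allow the host NQN with the key under test
    # (the ctrlr key is only passed when ckeys[keyid] is non-empty).
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach a controller over RDMA, authenticating with the same keys.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Verify the controller came up and the qpair authenticated, then tear down.
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0                # auth: digest/dhgroup/state
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The kernel-initiator half of the pass (the nvme connect that follows) then repeats the handshake in-band before the host entry is removed and the next dhgroup/key combination is configured.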
00:20:41.125 16:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:41.692 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.950 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:41.950 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.950 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.950 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.950 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.950 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.950 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.209 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.468 00:20:42.468 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.468 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.468 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.468 16:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.468 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.468 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.468 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.468 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.468 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.468 { 00:20:42.468 "cntlid": 109, 00:20:42.468 "qid": 0, 00:20:42.468 "state": "enabled", 00:20:42.468 "thread": "nvmf_tgt_poll_group_000", 00:20:42.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:42.468 "listen_address": { 00:20:42.468 "trtype": "RDMA", 00:20:42.468 "adrfam": "IPv4", 00:20:42.468 "traddr": "192.168.100.8", 00:20:42.468 "trsvcid": "4420" 00:20:42.468 }, 00:20:42.468 "peer_address": { 00:20:42.468 "trtype": "RDMA", 00:20:42.468 "adrfam": "IPv4", 00:20:42.468 "traddr": "192.168.100.8", 00:20:42.468 "trsvcid": "47925" 00:20:42.468 }, 00:20:42.468 "auth": { 00:20:42.468 "state": "completed", 00:20:42.468 "digest": "sha512", 00:20:42.468 "dhgroup": "ffdhe2048" 00:20:42.468 } 00:20:42.468 } 00:20:42.468 ]' 00:20:42.468 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.727 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.727 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.727 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.727 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.727 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.727 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.727 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.986 16:08:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:42.986 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:43.605 16:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.605 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:43.605 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.605 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.605 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.605 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.605 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.605 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:43.864 16:08:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.864 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.124 00:20:44.124 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.124 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.124 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.382 { 00:20:44.382 "cntlid": 111, 00:20:44.382 "qid": 0, 00:20:44.382 "state": "enabled", 00:20:44.382 "thread": "nvmf_tgt_poll_group_000", 00:20:44.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:44.382 "listen_address": { 00:20:44.382 "trtype": "RDMA", 00:20:44.382 "adrfam": "IPv4", 00:20:44.382 "traddr": "192.168.100.8", 00:20:44.382 "trsvcid": "4420" 00:20:44.382 }, 00:20:44.382 "peer_address": { 00:20:44.382 "trtype": "RDMA", 00:20:44.382 "adrfam": "IPv4", 00:20:44.382 "traddr": "192.168.100.8", 00:20:44.382 "trsvcid": "60130" 00:20:44.382 }, 00:20:44.382 "auth": { 00:20:44.382 "state": "completed", 00:20:44.382 "digest": "sha512", 00:20:44.382 "dhgroup": "ffdhe2048" 00:20:44.382 } 00:20:44.382 } 00:20:44.382 ]' 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.382 16:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.641 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:44.641 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:45.208 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.466 16:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.725 00:20:45.725 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.725 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.725 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.984 { 00:20:45.984 "cntlid": 113, 00:20:45.984 "qid": 0, 00:20:45.984 "state": "enabled", 00:20:45.984 "thread": "nvmf_tgt_poll_group_000", 00:20:45.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:45.984 "listen_address": { 00:20:45.984 "trtype": "RDMA", 00:20:45.984 "adrfam": "IPv4", 00:20:45.984 "traddr": "192.168.100.8", 00:20:45.984 "trsvcid": "4420" 00:20:45.984 }, 00:20:45.984 "peer_address": { 00:20:45.984 "trtype": "RDMA", 00:20:45.984 "adrfam": "IPv4", 00:20:45.984 "traddr": "192.168.100.8", 00:20:45.984 "trsvcid": "53037" 00:20:45.984 }, 00:20:45.984 "auth": { 00:20:45.984 "state": "completed", 00:20:45.984 "digest": "sha512", 00:20:45.984 "dhgroup": "ffdhe3072" 00:20:45.984 } 00:20:45.984 } 00:20:45.984 ]' 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.984 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.243 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.243 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.243 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.243 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:46.243 16:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:47.179 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.179 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:47.179 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.179 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
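The ckey assignment that recurs in this trace is doing real work: it is bash's ${var:+word} conditional expansion, so the --dhchap-ctrlr-key argument pair only materializes when a controller key is defined for the key index in $3 — which is why the key3 iterations call nvmf_subsystem_add_host with --dhchap-key alone. A standalone illustration (hypothetical ckeys contents, not the script's real values):

    demo() {
        local keyid=$3
        # Expands to two words when ckeys[keyid] is non-empty, to nothing otherwise.
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "extra args: ${ckey[*]:-<none>}"
    }
    ckeys=("c0" "c1" "c2" "")        # hypothetical: key3 has no controller key
    demo sha512 ffdhe2048 1          # -> extra args: --dhchap-ctrlr-key ckey1
    demo sha512 ffdhe2048 3          # -> extra args: <none>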
00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.180 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.438 00:20:47.438 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.438 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.438 16:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.696 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.696 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.696 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.696 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.696 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.696 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.696 { 00:20:47.696 "cntlid": 115, 00:20:47.696 "qid": 0, 00:20:47.696 "state": "enabled", 00:20:47.696 "thread": "nvmf_tgt_poll_group_000", 00:20:47.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:47.696 "listen_address": { 00:20:47.696 "trtype": "RDMA", 00:20:47.696 "adrfam": "IPv4", 00:20:47.696 "traddr": "192.168.100.8", 00:20:47.697 "trsvcid": "4420" 00:20:47.697 }, 00:20:47.697 "peer_address": { 00:20:47.697 "trtype": "RDMA", 00:20:47.697 "adrfam": "IPv4", 00:20:47.697 "traddr": "192.168.100.8", 00:20:47.697 "trsvcid": "35251" 00:20:47.697 }, 00:20:47.697 "auth": { 00:20:47.697 "state": "completed", 00:20:47.697 "digest": "sha512", 00:20:47.697 "dhgroup": "ffdhe3072" 00:20:47.697 } 00:20:47.697 } 00:20:47.697 ]' 00:20:47.697 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.697 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.697 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
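A note on the assertions interleaved with the qpair dumps: the script pulls a single field from the first qpair with jq and compares it against the loop's expected value; right-hand sides like \f\f\d\h\e\3\0\7\2 are simply how bash xtrace renders a quoted, non-glob pattern inside [[ ]]. Condensed into one sketch (the dhgroup and state comparisons continue below in exactly this form):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]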
00:20:47.697 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.697 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.956 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.956 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.956 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.956 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:47.956 16:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:48.525 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.784 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:48.784 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.784 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.784 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.784 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.784 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.784 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.042 
16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.042 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.301 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.301 { 00:20:49.301 "cntlid": 117, 00:20:49.301 "qid": 0, 00:20:49.301 "state": "enabled", 00:20:49.301 "thread": "nvmf_tgt_poll_group_000", 00:20:49.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:49.301 "listen_address": { 00:20:49.301 "trtype": "RDMA", 00:20:49.301 "adrfam": "IPv4", 00:20:49.301 "traddr": "192.168.100.8", 00:20:49.301 "trsvcid": "4420" 00:20:49.301 }, 00:20:49.301 "peer_address": { 00:20:49.301 "trtype": "RDMA", 00:20:49.301 "adrfam": "IPv4", 00:20:49.301 "traddr": "192.168.100.8", 00:20:49.301 "trsvcid": "58628" 00:20:49.301 }, 00:20:49.301 "auth": { 00:20:49.301 "state": "completed", 00:20:49.301 "digest": "sha512", 00:20:49.301 "dhgroup": "ffdhe3072" 00:20:49.301 } 00:20:49.301 } 00:20:49.301 ]' 00:20:49.301 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:49.561 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.561 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.561 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.561 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.561 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.561 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.561 16:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.820 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:49.820 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:50.389 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.389 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:50.389 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.389 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.389 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.389 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.389 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.389 16:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
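For the in-band half of each pass, the kernel initiator uses nvme-cli with the printable secret representation DHHC-1:NN:<base64>: seen throughout this trace (the two-digit field identifies the hash used to transform the configured secret, with 00 meaning the secret is used as-is). A sketch with placeholder secrets — the real values come from the keys/ckeys arrays in auth.sh:

    # Placeholder secrets for illustration only.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret 'DHHC-1:00:<host-key-base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-key-base64>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)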
00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.649 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.908 00:20:50.908 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.908 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.908 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.167 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.167 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.167 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.167 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.167 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.167 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.167 { 00:20:51.167 "cntlid": 119, 00:20:51.167 "qid": 0, 00:20:51.167 "state": "enabled", 00:20:51.167 "thread": "nvmf_tgt_poll_group_000", 00:20:51.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:51.168 "listen_address": { 00:20:51.168 "trtype": "RDMA", 00:20:51.168 "adrfam": "IPv4", 00:20:51.168 "traddr": "192.168.100.8", 00:20:51.168 "trsvcid": "4420" 00:20:51.168 }, 00:20:51.168 "peer_address": { 00:20:51.168 "trtype": "RDMA", 00:20:51.168 "adrfam": "IPv4", 00:20:51.168 "traddr": "192.168.100.8", 00:20:51.168 "trsvcid": "39142" 00:20:51.168 }, 00:20:51.168 "auth": { 00:20:51.168 "state": "completed", 00:20:51.168 "digest": "sha512", 00:20:51.168 "dhgroup": "ffdhe3072" 
00:20:51.168 } 00:20:51.168 } 00:20:51.168 ]' 00:20:51.168 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.168 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.168 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.168 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.168 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.168 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.168 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.168 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.427 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:51.427 16:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:51.995 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.254 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:52.254 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.254 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.254 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.254 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.254 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.254 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.254 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.514 16:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.773 00:20:52.773 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.773 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.773 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.773 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.773 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.774 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.774 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.774 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.774 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.774 { 00:20:52.774 "cntlid": 121, 00:20:52.774 "qid": 0, 00:20:52.774 "state": "enabled", 00:20:52.774 "thread": "nvmf_tgt_poll_group_000", 00:20:52.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:52.774 "listen_address": { 00:20:52.774 "trtype": "RDMA", 00:20:52.774 "adrfam": "IPv4", 00:20:52.774 "traddr": "192.168.100.8", 00:20:52.774 "trsvcid": "4420" 00:20:52.774 }, 00:20:52.774 "peer_address": { 00:20:52.774 "trtype": "RDMA", 
00:20:52.774 "adrfam": "IPv4", 00:20:52.774 "traddr": "192.168.100.8", 00:20:52.774 "trsvcid": "53497" 00:20:52.774 }, 00:20:52.774 "auth": { 00:20:52.774 "state": "completed", 00:20:52.774 "digest": "sha512", 00:20:52.774 "dhgroup": "ffdhe4096" 00:20:52.774 } 00:20:52.774 } 00:20:52.774 ]' 00:20:52.774 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.032 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.032 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.032 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.032 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.032 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.032 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.032 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.291 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:53.291 16:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:20:53.859 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.859 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:53.859 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.859 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.859 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.859 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.859 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:53.859 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.118 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.378 00:20:54.378 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.378 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.378 16:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.637 { 00:20:54.637 "cntlid": 123, 00:20:54.637 "qid": 0, 00:20:54.637 "state": "enabled", 00:20:54.637 "thread": "nvmf_tgt_poll_group_000", 
00:20:54.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:54.637 "listen_address": { 00:20:54.637 "trtype": "RDMA", 00:20:54.637 "adrfam": "IPv4", 00:20:54.637 "traddr": "192.168.100.8", 00:20:54.637 "trsvcid": "4420" 00:20:54.637 }, 00:20:54.637 "peer_address": { 00:20:54.637 "trtype": "RDMA", 00:20:54.637 "adrfam": "IPv4", 00:20:54.637 "traddr": "192.168.100.8", 00:20:54.637 "trsvcid": "44688" 00:20:54.637 }, 00:20:54.637 "auth": { 00:20:54.637 "state": "completed", 00:20:54.637 "digest": "sha512", 00:20:54.637 "dhgroup": "ffdhe4096" 00:20:54.637 } 00:20:54.637 } 00:20:54.637 ]' 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.637 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.896 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.896 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.896 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.896 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:54.896 16:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:20:55.464 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.724 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:55.724 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.724 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.724 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.724 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.724 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:20:55.724 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.983 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.243 00:20:56.243 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.243 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.243 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
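The qpairs listing that follows, like the ones above, is the verification step: after each attach the host confirms the controller name, then the target is asked for the subsystem's active qpairs and the auth block is asserted field by field with jq. A minimal sketch of those assertions, reusing rpc/hostsock/subnqn from the earlier sketch and again assuming rpc_cmd is the target-side rpc.py:

# Minimal sketch of the per-attach verification seen throughout this log.
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # the dhgroup of the current pass
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]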
00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.579 { 00:20:56.579 "cntlid": 125, 00:20:56.579 "qid": 0, 00:20:56.579 "state": "enabled", 00:20:56.579 "thread": "nvmf_tgt_poll_group_000", 00:20:56.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:56.579 "listen_address": { 00:20:56.579 "trtype": "RDMA", 00:20:56.579 "adrfam": "IPv4", 00:20:56.579 "traddr": "192.168.100.8", 00:20:56.579 "trsvcid": "4420" 00:20:56.579 }, 00:20:56.579 "peer_address": { 00:20:56.579 "trtype": "RDMA", 00:20:56.579 "adrfam": "IPv4", 00:20:56.579 "traddr": "192.168.100.8", 00:20:56.579 "trsvcid": "36079" 00:20:56.579 }, 00:20:56.579 "auth": { 00:20:56.579 "state": "completed", 00:20:56.579 "digest": "sha512", 00:20:56.579 "dhgroup": "ffdhe4096" 00:20:56.579 } 00:20:56.579 } 00:20:56.579 ]' 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.579 16:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.838 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:56.838 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:20:57.404 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.404 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:57.404 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.404 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.404 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.404 16:08:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.404 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.404 16:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.663 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.664 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.664 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.664 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.923 00:20:57.923 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.923 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.923 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.181 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.181 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.181 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.181 16:08:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.181 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.181 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.181 { 00:20:58.181 "cntlid": 127, 00:20:58.181 "qid": 0, 00:20:58.181 "state": "enabled", 00:20:58.181 "thread": "nvmf_tgt_poll_group_000", 00:20:58.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:58.181 "listen_address": { 00:20:58.181 "trtype": "RDMA", 00:20:58.181 "adrfam": "IPv4", 00:20:58.181 "traddr": "192.168.100.8", 00:20:58.181 "trsvcid": "4420" 00:20:58.181 }, 00:20:58.181 "peer_address": { 00:20:58.181 "trtype": "RDMA", 00:20:58.181 "adrfam": "IPv4", 00:20:58.181 "traddr": "192.168.100.8", 00:20:58.181 "trsvcid": "37889" 00:20:58.181 }, 00:20:58.181 "auth": { 00:20:58.181 "state": "completed", 00:20:58.181 "digest": "sha512", 00:20:58.181 "dhgroup": "ffdhe4096" 00:20:58.181 } 00:20:58.181 } 00:20:58.181 ]' 00:20:58.181 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.182 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.182 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.182 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.182 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.182 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.182 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.182 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.440 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:58.440 16:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:20:59.007 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.267 16:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.836 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.836 16:08:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.836 { 00:20:59.836 "cntlid": 129, 00:20:59.836 "qid": 0, 00:20:59.836 "state": "enabled", 00:20:59.836 "thread": "nvmf_tgt_poll_group_000", 00:20:59.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:59.836 "listen_address": { 00:20:59.836 "trtype": "RDMA", 00:20:59.836 "adrfam": "IPv4", 00:20:59.836 "traddr": "192.168.100.8", 00:20:59.836 "trsvcid": "4420" 00:20:59.836 }, 00:20:59.836 "peer_address": { 00:20:59.836 "trtype": "RDMA", 00:20:59.836 "adrfam": "IPv4", 00:20:59.836 "traddr": "192.168.100.8", 00:20:59.836 "trsvcid": "49726" 00:20:59.836 }, 00:20:59.836 "auth": { 00:20:59.836 "state": "completed", 00:20:59.836 "digest": "sha512", 00:20:59.836 "dhgroup": "ffdhe6144" 00:20:59.836 } 00:20:59.836 } 00:20:59.836 ]' 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.836 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.094 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.094 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.094 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.094 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.094 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.095 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:21:00.095 16:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.032 16:08:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.032 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.601 00:21:01.601 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.601 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:21:01.601 16:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.601 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.601 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.601 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.601 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.601 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.601 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.601 { 00:21:01.601 "cntlid": 131, 00:21:01.601 "qid": 0, 00:21:01.601 "state": "enabled", 00:21:01.601 "thread": "nvmf_tgt_poll_group_000", 00:21:01.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:01.601 "listen_address": { 00:21:01.601 "trtype": "RDMA", 00:21:01.601 "adrfam": "IPv4", 00:21:01.601 "traddr": "192.168.100.8", 00:21:01.601 "trsvcid": "4420" 00:21:01.601 }, 00:21:01.601 "peer_address": { 00:21:01.601 "trtype": "RDMA", 00:21:01.601 "adrfam": "IPv4", 00:21:01.601 "traddr": "192.168.100.8", 00:21:01.601 "trsvcid": "60411" 00:21:01.601 }, 00:21:01.601 "auth": { 00:21:01.601 "state": "completed", 00:21:01.601 "digest": "sha512", 00:21:01.601 "dhgroup": "ffdhe6144" 00:21:01.601 } 00:21:01.601 } 00:21:01.601 ]' 00:21:01.601 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.861 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.861 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.861 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.861 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.861 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.861 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.861 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.120 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:21:02.120 16:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret 
DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:21:02.689 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.689 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:02.689 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.689 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.689 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.689 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.689 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:02.689 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.949 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.211 00:21:03.211 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.211 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.211 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.470 { 00:21:03.470 "cntlid": 133, 00:21:03.470 "qid": 0, 00:21:03.470 "state": "enabled", 00:21:03.470 "thread": "nvmf_tgt_poll_group_000", 00:21:03.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:03.470 "listen_address": { 00:21:03.470 "trtype": "RDMA", 00:21:03.470 "adrfam": "IPv4", 00:21:03.470 "traddr": "192.168.100.8", 00:21:03.470 "trsvcid": "4420" 00:21:03.470 }, 00:21:03.470 "peer_address": { 00:21:03.470 "trtype": "RDMA", 00:21:03.470 "adrfam": "IPv4", 00:21:03.470 "traddr": "192.168.100.8", 00:21:03.470 "trsvcid": "40719" 00:21:03.470 }, 00:21:03.470 "auth": { 00:21:03.470 "state": "completed", 00:21:03.470 "digest": "sha512", 00:21:03.470 "dhgroup": "ffdhe6144" 00:21:03.470 } 00:21:03.470 } 00:21:03.470 ]' 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.470 16:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.470 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.470 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.730 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.730 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.730 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.730 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:21:03.730 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:21:04.299 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.558 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:04.558 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.558 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.558 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.558 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.558 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.558 16:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.817 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.076 00:21:05.076 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.076 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.076 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.335 { 00:21:05.335 "cntlid": 135, 00:21:05.335 "qid": 0, 00:21:05.335 "state": "enabled", 00:21:05.335 "thread": "nvmf_tgt_poll_group_000", 00:21:05.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:05.335 "listen_address": { 00:21:05.335 "trtype": "RDMA", 00:21:05.335 "adrfam": "IPv4", 00:21:05.335 "traddr": "192.168.100.8", 00:21:05.335 "trsvcid": "4420" 00:21:05.335 }, 00:21:05.335 "peer_address": { 00:21:05.335 "trtype": "RDMA", 00:21:05.335 "adrfam": "IPv4", 00:21:05.335 "traddr": "192.168.100.8", 00:21:05.335 "trsvcid": "44700" 00:21:05.335 }, 00:21:05.335 "auth": { 00:21:05.335 "state": "completed", 00:21:05.335 "digest": "sha512", 00:21:05.335 "dhgroup": "ffdhe6144" 00:21:05.335 } 00:21:05.335 } 00:21:05.335 ]' 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.335 16:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.594 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 
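Every bdev-level round trip is followed by the same handshake through the kernel host stack, and the nvme-cli invocation that follows passes the DHHC-1 secret on the command line. The sketch below reproduces it from the log; key3 carries no controller secret, so --dhchap-ctrl-secret is absent here, whereas for keys 0-2 the log shows it supplied as well.

# Kernel-initiator leg of the same test, as invoked via nvme-cli in the log;
# -i sets the I/O queue count and -l the controller-loss timeout.
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
    --dhchap-secret 'DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0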
00:21:05.594 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:21:06.162 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.421 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:06.421 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.421 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.421 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.421 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.421 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.421 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.421 16:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.681 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.250 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.250 { 00:21:07.250 "cntlid": 137, 00:21:07.250 "qid": 0, 00:21:07.250 "state": "enabled", 00:21:07.250 "thread": "nvmf_tgt_poll_group_000", 00:21:07.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:07.250 "listen_address": { 00:21:07.250 "trtype": "RDMA", 00:21:07.250 "adrfam": "IPv4", 00:21:07.250 "traddr": "192.168.100.8", 00:21:07.250 "trsvcid": "4420" 00:21:07.250 }, 00:21:07.250 "peer_address": { 00:21:07.250 "trtype": "RDMA", 00:21:07.250 "adrfam": "IPv4", 00:21:07.250 "traddr": "192.168.100.8", 00:21:07.250 "trsvcid": "52751" 00:21:07.250 }, 00:21:07.250 "auth": { 00:21:07.250 "state": "completed", 00:21:07.250 "digest": "sha512", 00:21:07.250 "dhgroup": "ffdhe8192" 00:21:07.250 } 00:21:07.250 } 00:21:07.250 ]' 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.250 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.509 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.509 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.509 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.509 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.509 16:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.768 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:21:07.768 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:21:08.337 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.337 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:08.337 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.337 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.337 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.337 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.337 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:08.337 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.596 16:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.596 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:21:08.596 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.596 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.596 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.164 00:21:09.164 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.164 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.165 { 00:21:09.165 "cntlid": 139, 00:21:09.165 "qid": 0, 00:21:09.165 "state": "enabled", 00:21:09.165 "thread": "nvmf_tgt_poll_group_000", 00:21:09.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:09.165 "listen_address": { 00:21:09.165 "trtype": "RDMA", 00:21:09.165 "adrfam": "IPv4", 00:21:09.165 "traddr": "192.168.100.8", 00:21:09.165 "trsvcid": "4420" 00:21:09.165 }, 00:21:09.165 "peer_address": { 00:21:09.165 "trtype": "RDMA", 00:21:09.165 "adrfam": "IPv4", 00:21:09.165 "traddr": "192.168.100.8", 00:21:09.165 "trsvcid": "43740" 00:21:09.165 }, 00:21:09.165 "auth": { 00:21:09.165 "state": "completed", 00:21:09.165 "digest": "sha512", 00:21:09.165 "dhgroup": "ffdhe8192" 00:21:09.165 } 00:21:09.165 } 00:21:09.165 ]' 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.165 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.424 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.424 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.424 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.424 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.424 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.424 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.424 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:21:09.424 16:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: --dhchap-ctrl-secret DHHC-1:02:YzU0Zjk4MWFhOTg1NjUxMjY0NWIwMTQ5ZTM5OWJmMmIwMTE4ZjU4ZWQwMDhjNDVkJGoQcg==: 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.362 16:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.931 00:21:10.931 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.931 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.931 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.191 { 00:21:11.191 "cntlid": 141, 00:21:11.191 "qid": 0, 00:21:11.191 "state": "enabled", 00:21:11.191 "thread": "nvmf_tgt_poll_group_000", 00:21:11.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:11.191 "listen_address": { 00:21:11.191 "trtype": "RDMA", 00:21:11.191 "adrfam": "IPv4", 00:21:11.191 "traddr": "192.168.100.8", 00:21:11.191 "trsvcid": "4420" 00:21:11.191 }, 00:21:11.191 "peer_address": { 00:21:11.191 "trtype": "RDMA", 00:21:11.191 "adrfam": "IPv4", 00:21:11.191 "traddr": "192.168.100.8", 00:21:11.191 "trsvcid": "33229" 00:21:11.191 }, 00:21:11.191 "auth": { 00:21:11.191 "state": "completed", 00:21:11.191 "digest": "sha512", 00:21:11.191 "dhgroup": "ffdhe8192" 00:21:11.191 } 00:21:11.191 } 00:21:11.191 ]' 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.191 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.451 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:21:11.451 16:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:01:YjAzMDExYmRiZjQ4Zjg2MDkxMjFiNjYwYTUzZDRhYmMenGiL: 00:21:12.018 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.277 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:12.277 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.277 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.277 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.277 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.277 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.277 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.535 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:12.535 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.535 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.535 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:12.535 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.536 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.536 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:12.536 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.536 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.536 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.536 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.536 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.536 16:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.795 00:21:12.795 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.795 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.795 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.053 { 00:21:13.053 "cntlid": 143, 00:21:13.053 "qid": 0, 00:21:13.053 "state": "enabled", 00:21:13.053 "thread": "nvmf_tgt_poll_group_000", 00:21:13.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:13.053 "listen_address": { 00:21:13.053 "trtype": "RDMA", 00:21:13.053 "adrfam": "IPv4", 00:21:13.053 "traddr": "192.168.100.8", 00:21:13.053 "trsvcid": "4420" 00:21:13.053 }, 00:21:13.053 "peer_address": { 00:21:13.053 "trtype": "RDMA", 00:21:13.053 "adrfam": "IPv4", 00:21:13.053 "traddr": "192.168.100.8", 00:21:13.053 "trsvcid": "43357" 00:21:13.053 }, 00:21:13.053 "auth": { 00:21:13.053 "state": "completed", 00:21:13.053 "digest": "sha512", 00:21:13.053 "dhgroup": "ffdhe8192" 00:21:13.053 } 00:21:13.053 } 00:21:13.053 ]' 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.053 16:08:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.053 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.312 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.312 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.312 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.312 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:21:13.312 16:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:21:13.879 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.138 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.397 16:08:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.397 16:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.966 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.966 { 00:21:14.966 "cntlid": 145, 00:21:14.966 "qid": 0, 00:21:14.966 "state": "enabled", 00:21:14.966 "thread": "nvmf_tgt_poll_group_000", 00:21:14.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:14.966 "listen_address": { 00:21:14.966 "trtype": "RDMA", 00:21:14.966 "adrfam": "IPv4", 00:21:14.966 "traddr": "192.168.100.8", 00:21:14.966 "trsvcid": "4420" 00:21:14.966 }, 00:21:14.966 
"peer_address": { 00:21:14.966 "trtype": "RDMA", 00:21:14.966 "adrfam": "IPv4", 00:21:14.966 "traddr": "192.168.100.8", 00:21:14.966 "trsvcid": "56864" 00:21:14.966 }, 00:21:14.966 "auth": { 00:21:14.966 "state": "completed", 00:21:14.966 "digest": "sha512", 00:21:14.966 "dhgroup": "ffdhe8192" 00:21:14.966 } 00:21:14.966 } 00:21:14.966 ]' 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.966 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.224 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.225 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.225 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.225 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.225 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:21:15.225 16:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YmM2NjhlNzNlZTBkM2RlNGFjOGIwN2MxMmU4MGQzNzM1NWVjZDg4NWVkNDRjNGQ458jYMQ==: --dhchap-ctrl-secret DHHC-1:03:YzYzMzRkMGIwZmUzMmI0NDdiYjdjNjc5ODBlZGFjNjRkNzBhZGNhZGM4YWNkMjRjMzY4N2I5MGEwZTZiMWQyN7e/pmc=: 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.161 16:08:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:16.161 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:16.421 request: 00:21:16.421 { 00:21:16.421 "name": "nvme0", 00:21:16.421 "trtype": "rdma", 00:21:16.421 "traddr": "192.168.100.8", 00:21:16.421 "adrfam": "ipv4", 00:21:16.421 "trsvcid": "4420", 00:21:16.421 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:16.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:16.421 "prchk_reftag": false, 00:21:16.421 "prchk_guard": false, 00:21:16.421 "hdgst": false, 00:21:16.421 "ddgst": false, 00:21:16.421 "dhchap_key": "key2", 00:21:16.421 "allow_unrecognized_csi": false, 00:21:16.421 "method": "bdev_nvme_attach_controller", 00:21:16.421 "req_id": 1 00:21:16.421 } 00:21:16.421 Got JSON-RPC error response 00:21:16.421 response: 00:21:16.421 { 00:21:16.421 "code": -5, 00:21:16.421 "message": "Input/output error" 00:21:16.421 } 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.421 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:16.681 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.681 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:16.681 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:16.681 16:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:16.939 request: 00:21:16.939 { 00:21:16.939 "name": "nvme0", 00:21:16.939 "trtype": "rdma", 00:21:16.939 "traddr": "192.168.100.8", 00:21:16.939 "adrfam": "ipv4", 00:21:16.939 "trsvcid": "4420", 00:21:16.939 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:16.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:16.939 "prchk_reftag": false, 00:21:16.939 "prchk_guard": false, 00:21:16.939 "hdgst": false, 00:21:16.939 "ddgst": false, 00:21:16.939 "dhchap_key": "key1", 00:21:16.939 "dhchap_ctrlr_key": "ckey2", 00:21:16.939 "allow_unrecognized_csi": false, 00:21:16.940 "method": "bdev_nvme_attach_controller", 00:21:16.940 "req_id": 1 00:21:16.940 } 00:21:16.940 Got JSON-RPC error response 00:21:16.940 response: 00:21:16.940 { 00:21:16.940 "code": -5, 00:21:16.940 "message": "Input/output error" 00:21:16.940 } 00:21:16.940 16:08:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.940 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.508 request: 00:21:17.508 { 00:21:17.508 "name": "nvme0", 
00:21:17.508 "trtype": "rdma", 00:21:17.508 "traddr": "192.168.100.8", 00:21:17.508 "adrfam": "ipv4", 00:21:17.508 "trsvcid": "4420", 00:21:17.508 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:17.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:17.508 "prchk_reftag": false, 00:21:17.508 "prchk_guard": false, 00:21:17.508 "hdgst": false, 00:21:17.508 "ddgst": false, 00:21:17.508 "dhchap_key": "key1", 00:21:17.508 "dhchap_ctrlr_key": "ckey1", 00:21:17.508 "allow_unrecognized_csi": false, 00:21:17.508 "method": "bdev_nvme_attach_controller", 00:21:17.508 "req_id": 1 00:21:17.508 } 00:21:17.508 Got JSON-RPC error response 00:21:17.508 response: 00:21:17.508 { 00:21:17.508 "code": -5, 00:21:17.508 "message": "Input/output error" 00:21:17.508 } 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2832870 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2832870 ']' 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2832870 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.508 16:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2832870 00:21:17.508 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.508 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.508 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2832870' 00:21:17.508 killing process with pid 2832870 00:21:17.508 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2832870 00:21:17.508 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2832870 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:17.768 16:08:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=2856619 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 2856619 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2856619 ']' 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:17.768 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2856619 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2856619 ']' 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
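The step above tears down the first target (pid 2832870) and brings up a fresh one in a deferred-init state: --wait-for-rpc holds the application before subsystem initialization so the DHCHAP keys can be loaded into the keyring first, and -L nvmf_auth enables the nvmf_auth debug log component. Stripped of the autotest bookkeeping (killprocess and waitforlisten are the helpers doing this in the trace), the pattern is roughly the following; the polling loop is a sketch of what waitforlisten does, not its exact implementation:

    # Start a target that answers RPC but defers framework initialization.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Block until the app is answering on its UNIX-domain RPC socket.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done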
00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.028 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.287 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:18.287 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:18.287 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:18.287 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.287 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.287 null0 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hmr 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.lpi ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lpi 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MrA 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.j7H ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.j7H 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
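The @174/@175/@176 entries around here are iterations of the key-loading loop in target/auth.sh, reconstructed below from the expanded commands in the trace (the keys/ckeys arrays and the /tmp/spdk.key-* files were generated earlier in the run; the exact control flow in the script may differ slightly from this sketch):

    for i in "${!keys[@]}"; do
        # Every secret gets a keyring entry named key<i>.
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        # Controller (bidirectional) secrets are optional; key3 has none.
        [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done

Registering the secrets through the keyring_file module, rather than passing raw key material on each call, is what lets the later nvmf_subsystem_add_host and bdev_nvme_attach_controller RPCs refer to them by name (key0, ckey0, ...).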
00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mvj 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Wos ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Wos 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UQT 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.547 16:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.116 nvme0n1 00:21:19.116 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.116 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.116 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.375 { 00:21:19.375 "cntlid": 1, 00:21:19.375 "qid": 0, 00:21:19.375 "state": "enabled", 00:21:19.375 "thread": "nvmf_tgt_poll_group_000", 00:21:19.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:19.375 "listen_address": { 00:21:19.375 "trtype": "RDMA", 00:21:19.375 "adrfam": "IPv4", 00:21:19.375 "traddr": "192.168.100.8", 00:21:19.375 "trsvcid": "4420" 00:21:19.375 }, 00:21:19.375 "peer_address": { 00:21:19.375 "trtype": "RDMA", 00:21:19.375 "adrfam": "IPv4", 00:21:19.375 "traddr": "192.168.100.8", 00:21:19.375 "trsvcid": "52187" 00:21:19.375 }, 00:21:19.375 "auth": { 00:21:19.375 "state": "completed", 00:21:19.375 "digest": "sha512", 00:21:19.375 "dhgroup": "ffdhe8192" 00:21:19.375 } 00:21:19.375 } 00:21:19.375 ]' 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.375 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.635 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.635 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.635 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.635 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.635 16:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.894 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:21:19.894 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:21:20.471 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.471 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:20.471 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.471 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.471 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.471 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:20.472 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.472 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.472 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.472 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:20.472 16:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.790 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.049 request: 00:21:21.049 { 00:21:21.049 "name": "nvme0", 00:21:21.049 "trtype": "rdma", 00:21:21.049 "traddr": "192.168.100.8", 00:21:21.049 "adrfam": "ipv4", 00:21:21.049 "trsvcid": "4420", 00:21:21.049 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:21.049 "prchk_reftag": false, 00:21:21.049 "prchk_guard": false, 00:21:21.049 "hdgst": false, 00:21:21.049 "ddgst": false, 00:21:21.049 "dhchap_key": "key3", 00:21:21.049 "allow_unrecognized_csi": false, 00:21:21.049 "method": "bdev_nvme_attach_controller", 00:21:21.049 "req_id": 1 00:21:21.049 } 00:21:21.049 Got JSON-RPC error response 00:21:21.049 response: 00:21:21.049 { 00:21:21.049 "code": -5, 00:21:21.049 "message": "Input/output error" 00:21:21.049 } 00:21:21.049 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:21.049 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:21.049 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.049 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.049 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:21.049 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:21.049 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:21.049 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
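The NOT wrapper above encodes an expected failure: the earlier successful session negotiated sha512/ffdhe8192 (visible in the qpair JSON), so once the host is restricted to sha256 via bdev_nvme_set_options, the attach is supposed to come back with JSON-RPC error -5 (Input/output error). Reduced to its shape, the negative check looks like the following; a hedged sketch using only the rpc.py verbs visible in the trace, with the host RPC socket at /var/tmp/host.sock as in this job:

    # narrow the host to a digest the established target-side settings will not negotiate
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    # the attach must now fail; a successful return here would itself be a test failure
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3; then
        echo "unexpected: attach succeeded" >&2
        exit 1
    fi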
00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.309 request: 00:21:21.309 { 00:21:21.309 "name": "nvme0", 00:21:21.309 "trtype": "rdma", 00:21:21.309 "traddr": "192.168.100.8", 00:21:21.309 "adrfam": "ipv4", 00:21:21.309 "trsvcid": "4420", 00:21:21.309 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:21.309 "prchk_reftag": false, 00:21:21.309 "prchk_guard": false, 00:21:21.309 "hdgst": false, 00:21:21.309 "ddgst": false, 00:21:21.309 "dhchap_key": "key3", 00:21:21.309 "allow_unrecognized_csi": false, 00:21:21.309 "method": "bdev_nvme_attach_controller", 00:21:21.309 "req_id": 1 00:21:21.309 } 00:21:21.309 Got JSON-RPC error response 00:21:21.309 response: 00:21:21.309 { 00:21:21.309 "code": -5, 00:21:21.309 "message": "Input/output error" 00:21:21.309 } 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:21.309 16:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.569 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:22.137 request: 00:21:22.137 { 00:21:22.137 "name": "nvme0", 00:21:22.137 "trtype": "rdma", 00:21:22.137 "traddr": "192.168.100.8", 00:21:22.137 "adrfam": "ipv4", 00:21:22.137 "trsvcid": "4420", 00:21:22.137 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:22.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:22.137 "prchk_reftag": false, 00:21:22.137 "prchk_guard": false, 00:21:22.137 "hdgst": false, 00:21:22.137 "ddgst": false, 00:21:22.137 "dhchap_key": "key0", 00:21:22.137 "dhchap_ctrlr_key": "key1", 00:21:22.137 "allow_unrecognized_csi": false, 00:21:22.137 "method": "bdev_nvme_attach_controller", 00:21:22.137 "req_id": 1 00:21:22.137 } 00:21:22.137 Got JSON-RPC error response 00:21:22.137 response: 00:21:22.137 { 00:21:22.137 "code": -5, 00:21:22.137 "message": "Input/output error" 00:21:22.137 } 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.138 
16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:22.138 nvme0n1 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:22.138 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.397 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.397 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.397 16:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.656 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:21:22.656 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.656 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.656 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.656 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:22.656 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:22.656 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:23.594 nvme0n1 00:21:23.594 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:23.594 16:08:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:23.594 16:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.594 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.594 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:23.594 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.594 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.594 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.594 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:23.594 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:23.594 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.854 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.854 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:21:23.854 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: --dhchap-ctrl-secret DHHC-1:03:NGM5NGQ5YWY2ZTk5NjZmYmUxNmU5NmQ1M2I4YjgwYjBlMzFiMWNjMWY2YWIwZGU4OGQ3NDIxZWU2NWUyNWVjNvGyHXs=: 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.423 16:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:24.682 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:25.251 request: 00:21:25.251 { 00:21:25.251 "name": "nvme0", 00:21:25.251 "trtype": "rdma", 00:21:25.251 "traddr": "192.168.100.8", 00:21:25.251 "adrfam": "ipv4", 00:21:25.251 "trsvcid": "4420", 00:21:25.251 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:25.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:25.251 "prchk_reftag": false, 00:21:25.251 "prchk_guard": false, 00:21:25.251 "hdgst": false, 00:21:25.251 "ddgst": false, 00:21:25.251 "dhchap_key": "key1", 00:21:25.251 "allow_unrecognized_csi": false, 00:21:25.251 "method": "bdev_nvme_attach_controller", 00:21:25.251 "req_id": 1 00:21:25.251 } 00:21:25.251 Got JSON-RPC error response 00:21:25.251 response: 00:21:25.251 { 00:21:25.251 "code": -5, 00:21:25.251 "message": "Input/output error" 00:21:25.251 } 00:21:25.251 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:25.251 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.251 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.251 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.251 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:25.251 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:25.251 16:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:25.819 nvme0n1 00:21:25.819 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:25.819 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:25.819 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.078 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.078 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.078 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.078 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:26.078 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.078 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.337 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.337 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:26.337 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:26.337 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:26.337 nvme0n1 00:21:26.338 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:26.338 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:26.338 16:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.597 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.597 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.597 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: '' 2s 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: ]] 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OWY3NWM3NTFmMmRjMjkzYTdjMGE1OGFhMTdmZjg2MzDCfc7f: 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:26.855 16:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:28.761 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:28.761 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:28.761 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:28.761 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.020 16:08:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: 2s 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: ]] 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTYyZGZhMThiOWJmZDFlM2VmNmQ0YzdiYmY1YWIzNTdiNTQ0MWE3MGFlNzVkMmQ2w68tdQ==: 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:29.020 16:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:30.925 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:30.925 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:30.925 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:30.925 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:30.925 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:30.925 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:30.925 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:30.925 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.184 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:31.184 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.184 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.184 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.184 16:08:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:31.184 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:31.184 16:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:31.750 nvme0n1 00:21:31.750 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:31.750 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.750 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.750 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.750 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:31.750 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:32.319 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:32.319 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:32.319 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.579 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.579 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:32.579 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.579 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.579 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.579 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:32.579 16:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:32.579 16:09:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:32.579 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:32.579 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:32.839 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:33.408 request: 00:21:33.408 { 00:21:33.408 "name": "nvme0", 00:21:33.408 "dhchap_key": "key1", 00:21:33.408 "dhchap_ctrlr_key": "key3", 00:21:33.408 "method": "bdev_nvme_set_keys", 00:21:33.408 "req_id": 1 00:21:33.408 } 00:21:33.408 Got JSON-RPC error response 00:21:33.408 response: 00:21:33.408 { 00:21:33.408 "code": -13, 00:21:33.408 "message": "Permission denied" 00:21:33.408 } 00:21:33.408 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:33.408 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:33.408 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:33.408 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:33.408 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:21:33.408 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.408 16:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:33.671 16:09:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:33.671 16:09:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:34.611 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:34.611 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:34.611 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.870 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:34.870 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.870 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.870 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.870 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.870 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:34.870 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:34.870 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:35.437 nvme0n1 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:35.437 
16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:35.437 16:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:36.006 request: 00:21:36.006 { 00:21:36.006 "name": "nvme0", 00:21:36.006 "dhchap_key": "key2", 00:21:36.006 "dhchap_ctrlr_key": "key0", 00:21:36.006 "method": "bdev_nvme_set_keys", 00:21:36.006 "req_id": 1 00:21:36.006 } 00:21:36.006 Got JSON-RPC error response 00:21:36.006 response: 00:21:36.006 { 00:21:36.006 "code": -13, 00:21:36.006 "message": "Permission denied" 00:21:36.006 } 00:21:36.006 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:36.006 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:36.006 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:36.006 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:36.006 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:36.006 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.006 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:36.265 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:36.265 16:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:37.202 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:37.203 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:37.203 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:37.462 16:09:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2832969 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2832969 ']' 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2832969 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2832969 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2832969' 00:21:37.462 killing process with pid 2832969 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2832969 00:21:37.462 16:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2832969 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:37.721 rmmod nvme_rdma 00:21:37.721 rmmod nvme_fabrics 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 2856619 ']' 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 2856619 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2856619 ']' 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2856619 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856619 00:21:37.721 16:09:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856619' 00:21:37.721 killing process with pid 2856619 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2856619 00:21:37.721 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2856619 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hmr /tmp/spdk.key-sha256.MrA /tmp/spdk.key-sha384.mvj /tmp/spdk.key-sha512.UQT /tmp/spdk.key-sha512.lpi /tmp/spdk.key-sha384.j7H /tmp/spdk.key-sha256.Wos '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:21:37.981 00:21:37.981 real 2m41.731s 00:21:37.981 user 6m11.275s 00:21:37.981 sys 0m24.128s 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.981 ************************************ 00:21:37.981 END TEST nvmf_auth_target 00:21:37.981 ************************************ 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.981 16:09:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:38.241 ************************************ 00:21:38.241 START TEST nvmf_fuzz 00:21:38.241 ************************************ 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:38.241 * Looking for test storage... 
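Before the fuzz test spins up, note the teardown shape the auth test just went through: killprocess confirms the pid still names an SPDK reactor (the ps --no-headers -o comm= check) before signalling it, nvmftestfini unloads the host-side transport modules, and cleanup removes the generated key files. A minimal sketch of that idiom, with the pid hard-coded purely for illustration (the harness records the real pid when it launches the target):

    pid=2832969                               # illustrative; taken from the trace above
    if kill -0 "$pid" 2>/dev/null; then       # only signal a process that still exists
        kill "$pid"
        wait "$pid" 2>/dev/null               # reap it (works because this shell started it)
    fi
    modprobe -v -r nvme-rdma nvme-fabrics     # unload host-side fabrics modules
    rm -f /tmp/spdk.key-*                     # drop the generated DHHC-1 secrets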
00:21:38.241 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:38.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.241 --rc genhtml_branch_coverage=1 00:21:38.241 --rc genhtml_function_coverage=1 00:21:38.241 --rc genhtml_legend=1 00:21:38.241 --rc geninfo_all_blocks=1 00:21:38.241 --rc geninfo_unexecuted_blocks=1 00:21:38.241 00:21:38.241 ' 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:38.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.241 --rc genhtml_branch_coverage=1 00:21:38.241 --rc genhtml_function_coverage=1 00:21:38.241 --rc genhtml_legend=1 00:21:38.241 --rc geninfo_all_blocks=1 00:21:38.241 --rc geninfo_unexecuted_blocks=1 00:21:38.241 00:21:38.241 ' 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:38.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.241 --rc genhtml_branch_coverage=1 00:21:38.241 --rc genhtml_function_coverage=1 00:21:38.241 --rc genhtml_legend=1 00:21:38.241 --rc geninfo_all_blocks=1 00:21:38.241 --rc geninfo_unexecuted_blocks=1 00:21:38.241 00:21:38.241 ' 00:21:38.241 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:38.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.241 --rc genhtml_branch_coverage=1 00:21:38.242 --rc genhtml_function_coverage=1 00:21:38.242 --rc genhtml_legend=1 00:21:38.242 --rc geninfo_all_blocks=1 00:21:38.242 --rc geninfo_unexecuted_blocks=1 00:21:38.242 00:21:38.242 ' 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.242 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.242 16:09:06 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:21:44.816 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:44.817 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:44.817 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:21:44.817 16:09:12 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:44.817 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:44.817 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # rdma_device_init 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@526 -- # allocate_nic_ips 00:21:44.817 16:09:12 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:44.817 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:44.817 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:44.817 altname enp217s0f0np0 00:21:44.817 altname ens818f0np0 00:21:44.817 inet 192.168.100.8/24 scope global mlx_0_0 00:21:44.817 valid_lft forever preferred_lft forever 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for 
nic_name in $(get_rdma_if_list) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:44.817 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:44.817 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:44.817 altname enp217s0f1np1 00:21:44.817 altname ens818f1np1 00:21:44.817 inet 192.168.100.9/24 scope global mlx_0_1 00:21:44.817 valid_lft forever preferred_lft forever 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:44.817 16:09:12 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:44.817 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:21:44.818 192.168.100.9' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:21:44.818 192.168.100.9' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # head -n 1 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:21:44.818 192.168.100.9' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # tail -n +2 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # head -n 1 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2863915 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; 
killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2863915 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2863915 ']' 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.818 16:09:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:44.818 Malloc0 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 
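The rpc_cmd calls above are the entire fuzz-target bring-up: one RDMA transport, one 64 MiB malloc bdev, one subsystem carrying that bdev as a namespace, and a listener on the first discovered RDMA IP. A sketch of the same sequence driven by SPDK's rpc.py, run from a build tree; the relative paths assume this job's checkout layout, and the sleep is a crude stand-in for the script's waitforlisten helper:

    # Start the target on core 0, then configure it over the default RPC socket.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    sleep 2    # stand-in for waitforlisten on /var/tmp/spdk.sock
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512    # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                             # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420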
00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:21:44.818 16:09:13 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:22:16.908 Fuzzing completed. Shutting down the fuzz application 00:22:16.908 00:22:16.908 Dumping successful admin opcodes: 00:22:16.908 8, 9, 10, 24, 00:22:16.908 Dumping successful io opcodes: 00:22:16.908 0, 9, 00:22:16.908 NS: 0x200003af1f00 I/O qp, Total commands completed: 994278, total successful commands: 5818, random_seed: 833585088 00:22:16.908 NS: 0x200003af1f00 admin qp, Total commands completed: 132943, total successful commands: 1080, random_seed: 2818079296 00:22:16.908 16:09:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:16.908 Fuzzing completed. Shutting down the fuzz application 00:22:16.908 00:22:16.908 Dumping successful admin opcodes: 00:22:16.908 24, 00:22:16.908 Dumping successful io opcodes: 00:22:16.908 00:22:16.908 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 545205618 00:22:16.908 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 545267718 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.908 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:16.909 rmmod nvme_rdma 00:22:16.909 rmmod nvme_fabrics 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 2863915 ']' 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 2863915 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2863915 ']' 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 2863915 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.909 16:09:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2863915 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2863915' 00:22:16.909 killing process with pid 2863915 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 2863915 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 2863915 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:16.909 00:22:16.909 real 0m38.756s 00:22:16.909 user 0m49.311s 00:22:16.909 sys 0m20.366s 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:16.909 ************************************ 00:22:16.909 END TEST nvmf_fuzz 00:22:16.909 ************************************ 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:16.909 ************************************ 00:22:16.909 START TEST nvmf_multiconnection 00:22:16.909 ************************************ 00:22:16.909 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:22:17.227 * Looking for test storage... 
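To close out the fuzz section before the multiconnection output continues: the two nvme_fuzz passes above differ only in input strategy. The first generates random commands for 30 s from seed 123456 and completed roughly 994 k I/O commands; the second replays the fixed command set in example.json, which is consistent with its near-zero I/O totals. A sketch of re-running both by hand from the spdk checkout root, with the transport ID copied verbatim from the log:

    TRID='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
    # Pass 1: 30 s of randomly generated commands, fixed seed for reproducibility.
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
    # Pass 2: replay the bundled JSON command corpus instead of random generation.
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" \
        -j ./test/app/fuzz/nvme_fuzz/example.json -a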
00:22:17.227 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:17.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.227 --rc genhtml_branch_coverage=1 00:22:17.227 --rc genhtml_function_coverage=1 00:22:17.227 --rc genhtml_legend=1 00:22:17.227 --rc geninfo_all_blocks=1 00:22:17.227 --rc geninfo_unexecuted_blocks=1 00:22:17.227 00:22:17.227 ' 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:17.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.227 --rc genhtml_branch_coverage=1 00:22:17.227 --rc genhtml_function_coverage=1 00:22:17.227 --rc genhtml_legend=1 00:22:17.227 --rc geninfo_all_blocks=1 00:22:17.227 --rc geninfo_unexecuted_blocks=1 00:22:17.227 00:22:17.227 ' 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:17.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.227 --rc genhtml_branch_coverage=1 00:22:17.227 --rc genhtml_function_coverage=1 00:22:17.227 --rc genhtml_legend=1 00:22:17.227 --rc geninfo_all_blocks=1 00:22:17.227 --rc geninfo_unexecuted_blocks=1 00:22:17.227 00:22:17.227 ' 00:22:17.227 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:17.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.227 --rc genhtml_branch_coverage=1 00:22:17.227 --rc genhtml_function_coverage=1 00:22:17.227 --rc genhtml_legend=1 00:22:17.227 --rc geninfo_all_blocks=1 00:22:17.227 --rc geninfo_unexecuted_blocks=1 00:22:17.228 00:22:17.228 ' 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.228 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.228 16:09:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.808 
16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:23.808 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:23.808 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:23.808 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.808 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:23.808 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # rdma_device_init 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@526 -- # allocate_nic_ips 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:23.809 16:09:51 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:23.809 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:23.809 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:23.809 altname enp217s0f0np0 00:22:23.809 altname ens818f0np0 00:22:23.809 inet 192.168.100.8/24 scope global mlx_0_0 00:22:23.809 valid_lft forever preferred_lft forever 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 
00:22:23.809 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:23.809 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:23.809 altname enp217s0f1np1 00:22:23.809 altname ens818f1np1 00:22:23.809 inet 192.168.100.9/24 scope global mlx_0_1 00:22:23.809 valid_lft forever preferred_lft forever 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # 
interface=mlx_0_0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:22:23.809 192.168.100.9' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:22:23.809 192.168.100.9' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # head -n 1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:22:23.809 192.168.100.9' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # tail -n +2 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # head -n 1 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:23.809 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=2872635 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 2872635 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@831 -- # '[' -z 2872635 ']' 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:23.810 16:09:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:23.810 [2024-12-15 16:09:52.036448] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:23.810 [2024-12-15 16:09:52.036497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.810 [2024-12-15 16:09:52.105626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.810 [2024-12-15 16:09:52.146575] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.810 [2024-12-15 16:09:52.146614] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.810 [2024-12-15 16:09:52.146624] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.810 [2024-12-15 16:09:52.146632] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.810 [2024-12-15 16:09:52.146639] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.810 [2024-12-15 16:09:52.146681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.810 [2024-12-15 16:09:52.146777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.810 [2024-12-15 16:09:52.146797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.810 [2024-12-15 16:09:52.146799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.810 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:23.810 [2024-12-15 16:09:52.315970] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cade40/0x1cb2330) succeed. 00:22:23.810 [2024-12-15 16:09:52.326299] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1caf480/0x1cf39d0) succeed. 
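The startup sequence above launches nvmf_tgt with core mask 0xF, waits for its RPC socket, then registers the RDMA transport; the two create_ib_device notices confirm both mlx5 ports were claimed. A sketch of the equivalent steps, assuming SPDK's rpc.py is on the path (the retry budget and sleep interval are illustrative; the socket path and transport arguments are the ones used in this run):

rpc_addr=/var/tmp/spdk.sock
# Poll until the target answers on its RPC socket (what waitforlisten does).
for ((i = 0; i < 100; i++)); do
    rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
# Register the RDMA transport with the same options as the trace above.
rpc.py -s "$rpc_addr" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192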
00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 Malloc1 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 [2024-12-15 16:09:52.496616] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 Malloc2 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 Malloc3 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 
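Each pass of the multiconnection.sh loop above issues the same four RPCs per subsystem; with NVMF_SUBSYS=11 and the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 geometry set at the top of the test, the whole setup condenses to roughly the following (rpc.py defaulting to the /var/tmp/spdk.sock socket assumed above):

for i in $(seq 1 "$NVMF_SUBSYS"); do            # NVMF_SUBSYS=11 in this run
    rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done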
16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.070 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 Malloc4 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.071 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 Malloc5 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 Malloc6 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 16:09:52 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 Malloc7 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.331 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 Malloc8 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 Malloc9 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 Malloc10 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.332 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.591 Malloc11 00:22:24.591 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.591 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:24.591 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.591 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.592 16:09:52 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.592 16:09:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:25.529 16:09:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:25.529 16:09:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:25.529 16:09:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:25.529 16:09:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:25.529 16:09:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:27.433 16:09:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:27.433 16:09:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:27.433 16:09:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:22:27.433 16:09:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:27.433 16:09:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:27.433 16:09:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:27.433 16:09:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.433 16:09:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:22:28.370 16:09:56 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:28.371 16:09:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:28.371 16:09:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:28.371 16:09:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:28.371 16:09:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:30.907 16:09:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:30.907 16:09:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:30.907 16:09:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:22:30.907 16:09:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:30.907 16:09:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:30.907 16:09:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:30.907 16:09:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:30.908 16:09:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:22:31.476 16:09:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:31.476 16:09:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:31.476 16:09:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:31.476 16:09:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:31.476 16:09:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:33.382 16:10:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:33.382 16:10:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:33.382 16:10:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:22:33.641 16:10:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:33.641 16:10:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:33.641 16:10:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:33.641 16:10:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.641 16:10:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:22:34.578 16:10:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:34.578 16:10:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:34.578 16:10:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:34.578 16:10:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:34.578 16:10:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:36.483 16:10:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:36.483 16:10:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:36.483 16:10:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:22:36.483 16:10:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:36.483 16:10:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:36.483 16:10:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:36.483 16:10:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.484 16:10:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:22:37.417 16:10:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:37.418 16:10:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:37.418 16:10:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:37.418 16:10:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:37.418 16:10:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:39.951 16:10:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:39.951 16:10:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:39.951 16:10:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:22:39.951 16:10:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:39.951 16:10:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:39.951 16:10:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:39.951 16:10:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.951 16:10:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:22:40.519 16:10:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:40.519 16:10:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:40.519 16:10:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:40.519 16:10:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:40.519 16:10:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:42.430 16:10:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:42.430 16:10:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:42.430 16:10:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:22:42.688 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:42.688 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:42.688 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:42.688 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.688 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:22:43.623 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:43.623 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.624 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:43.624 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:43.624 16:10:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:45.529 16:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:45.529 16:10:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:45.529 16:10:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:22:45.529 16:10:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:45.529 16:10:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:22:45.529 16:10:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:45.529 16:10:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:45.529 16:10:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:22:46.465 16:10:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:46.465 16:10:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:46.465 16:10:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:46.465 16:10:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:46.465 16:10:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:48.996 16:10:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:48.996 16:10:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:48.996 16:10:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:22:48.996 16:10:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:48.996 16:10:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:48.996 16:10:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:48.996 16:10:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.996 16:10:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:22:49.564 16:10:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:49.564 16:10:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:49.564 16:10:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:49.564 16:10:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:49.564 16:10:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:51.467 16:10:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:51.467 16:10:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:51.467 16:10:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:22:51.727 16:10:20 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:51.727 16:10:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:51.727 16:10:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:51.727 16:10:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.727 16:10:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:22:52.769 16:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:52.769 16:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.769 16:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:52.769 16:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:52.769 16:10:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:54.674 16:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:54.674 16:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:54.674 16:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:22:54.674 16:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:54.674 16:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:54.674 16:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:54.674 16:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.674 16:10:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:22:55.611 16:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:55.611 16:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.611 16:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:55.611 16:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:55.611 16:10:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:57.516 16:10:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:57.516 16:10:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:57.516 16:10:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:22:57.775 16:10:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:57.775 16:10:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:57.775 16:10:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:57.775 16:10:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:57.775 [global] 00:22:57.775 thread=1 00:22:57.775 invalidate=1 00:22:57.775 rw=read 00:22:57.775 time_based=1 00:22:57.775 runtime=10 00:22:57.775 ioengine=libaio 00:22:57.775 direct=1 00:22:57.775 bs=262144 00:22:57.775 iodepth=64 00:22:57.775 norandommap=1 00:22:57.775 numjobs=1 00:22:57.775 00:22:57.775 [job0] 00:22:57.775 filename=/dev/nvme0n1 00:22:57.775 [job1] 00:22:57.775 filename=/dev/nvme10n1 00:22:57.775 [job2] 00:22:57.775 filename=/dev/nvme1n1 00:22:57.775 [job3] 00:22:57.775 filename=/dev/nvme2n1 00:22:57.775 [job4] 00:22:57.775 filename=/dev/nvme3n1 00:22:57.775 [job5] 00:22:57.775 filename=/dev/nvme4n1 00:22:57.775 [job6] 00:22:57.775 filename=/dev/nvme5n1 00:22:57.775 [job7] 00:22:57.775 filename=/dev/nvme6n1 00:22:57.775 [job8] 00:22:57.775 filename=/dev/nvme7n1 00:22:57.775 [job9] 00:22:57.775 filename=/dev/nvme8n1 00:22:57.775 [job10] 00:22:57.775 filename=/dev/nvme9n1 00:22:58.051 Could not set queue depth (nvme0n1) 00:22:58.051 Could not set queue depth (nvme10n1) 00:22:58.051 Could not set queue depth (nvme1n1) 00:22:58.051 Could not set queue depth (nvme2n1) 00:22:58.051 Could not set queue depth (nvme3n1) 00:22:58.051 Could not set queue depth (nvme4n1) 00:22:58.051 Could not set queue depth (nvme5n1) 00:22:58.051 Could not set queue depth (nvme6n1) 00:22:58.051 Could not set queue depth (nvme7n1) 00:22:58.051 Could not set queue depth (nvme8n1) 00:22:58.051 Could not set queue depth (nvme9n1) 00:22:58.312 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:58.312 fio-3.35 00:22:58.312 Starting 11 threads 00:23:10.510 00:23:10.510 job0: (groupid=0, jobs=1): err= 0: pid=2878878: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=1313, BW=328MiB/s (344MB/s)(3297MiB/10041msec) 00:23:10.510 slat (usec): min=12, max=16121, avg=754.31, stdev=1922.29 00:23:10.510 clat (usec): min=10677, max=83968, avg=47920.70, stdev=3373.65 00:23:10.510 lat (usec): min=10881, max=84025, avg=48675.01, stdev=3765.74 00:23:10.510 clat percentiles (usec): 00:23:10.510 | 1.00th=[44303], 5.00th=[45876], 10.00th=[45876], 20.00th=[46400], 00:23:10.510 | 30.00th=[46924], 40.00th=[46924], 50.00th=[47449], 60.00th=[47973], 00:23:10.510 | 70.00th=[48497], 80.00th=[49021], 90.00th=[50594], 95.00th=[52167], 00:23:10.510 | 99.00th=[56886], 99.50th=[60556], 99.90th=[79168], 99.95th=[83362], 00:23:10.510 | 99.99th=[84411] 00:23:10.510 bw ( KiB/s): min=328704, max=346112, per=8.31%, avg=336025.60, stdev=4506.89, samples=20 00:23:10.510 iops : min= 1284, max= 1352, avg=1312.60, stdev=17.61, samples=20 00:23:10.510 lat (msec) : 20=0.35%, 50=87.47%, 100=12.18% 00:23:10.510 cpu : usr=0.55%, sys=5.81%, ctx=2482, majf=0, minf=4098 00:23:10.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:10.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.510 issued rwts: total=13189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.510 job1: (groupid=0, jobs=1): err= 0: pid=2878879: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=1272, BW=318MiB/s (334MB/s)(3195MiB/10042msec) 00:23:10.510 slat (usec): min=15, max=16612, avg=779.65, stdev=1912.95 00:23:10.510 clat (usec): min=13153, max=96419, avg=49458.65, stdev=5106.73 00:23:10.510 lat (usec): min=13407, max=96441, avg=50238.30, stdev=5373.32 00:23:10.510 clat percentiles (usec): 00:23:10.510 | 1.00th=[44827], 5.00th=[45876], 10.00th=[46400], 20.00th=[46924], 00:23:10.510 | 30.00th=[46924], 40.00th=[47449], 50.00th=[47973], 60.00th=[48497], 00:23:10.510 | 70.00th=[49021], 80.00th=[50594], 90.00th=[54264], 95.00th=[62653], 00:23:10.510 | 99.00th=[66323], 99.50th=[69731], 99.90th=[86508], 99.95th=[94897], 00:23:10.510 | 99.99th=[94897] 00:23:10.510 bw ( KiB/s): min=248832, max=340480, per=8.05%, avg=325504.00, stdev=22083.81, samples=20 00:23:10.510 iops : min= 972, max= 1330, avg=1271.50, stdev=86.26, samples=20 00:23:10.510 lat (msec) : 20=0.23%, 50=75.98%, 100=23.78% 00:23:10.510 cpu : usr=0.40%, sys=4.62%, ctx=2436, majf=0, minf=4097 00:23:10.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:10.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.510 issued rwts: total=12778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.510 job2: (groupid=0, jobs=1): err= 0: pid=2878882: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=1309, BW=327MiB/s (343MB/s)(3288MiB/10041msec) 00:23:10.510 slat (usec): min=14, max=19915, avg=756.16, stdev=1910.70 00:23:10.510 clat (msec): min=11, max=101, avg=48.05, stdev= 3.52 00:23:10.510 lat (msec): min=12, max=101, avg=48.81, stdev= 3.89 00:23:10.510 clat percentiles (msec): 00:23:10.510 | 1.00th=[ 45], 5.00th=[ 46], 10.00th=[ 46], 
20.00th=[ 47], 00:23:10.510 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 48], 00:23:10.510 | 70.00th=[ 49], 80.00th=[ 50], 90.00th=[ 51], 95.00th=[ 53], 00:23:10.510 | 99.00th=[ 59], 99.50th=[ 65], 99.90th=[ 82], 99.95th=[ 86], 00:23:10.510 | 99.99th=[ 102] 00:23:10.510 bw ( KiB/s): min=320512, max=344576, per=8.29%, avg=335052.80, stdev=6705.36, samples=20 00:23:10.510 iops : min= 1252, max= 1346, avg=1308.80, stdev=26.19, samples=20 00:23:10.510 lat (msec) : 20=0.28%, 50=86.48%, 100=13.22%, 250=0.02% 00:23:10.510 cpu : usr=0.57%, sys=6.07%, ctx=2457, majf=0, minf=3659 00:23:10.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:10.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.510 issued rwts: total=13151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.510 job3: (groupid=0, jobs=1): err= 0: pid=2878883: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=1274, BW=319MiB/s (334MB/s)(3198MiB/10039msec) 00:23:10.510 slat (usec): min=13, max=17311, avg=777.63, stdev=1900.86 00:23:10.510 clat (usec): min=13815, max=90698, avg=49407.75, stdev=4813.51 00:23:10.510 lat (usec): min=14182, max=90733, avg=50185.38, stdev=5112.94 00:23:10.510 clat percentiles (usec): 00:23:10.510 | 1.00th=[45351], 5.00th=[45876], 10.00th=[46400], 20.00th=[46924], 00:23:10.510 | 30.00th=[47449], 40.00th=[47449], 50.00th=[47973], 60.00th=[48497], 00:23:10.510 | 70.00th=[49021], 80.00th=[50594], 90.00th=[53740], 95.00th=[62129], 00:23:10.510 | 99.00th=[65799], 99.50th=[67634], 99.90th=[82314], 99.95th=[84411], 00:23:10.510 | 99.99th=[90702] 00:23:10.510 bw ( KiB/s): min=247278, max=346624, per=8.06%, avg=325835.90, stdev=22798.78, samples=20 00:23:10.510 iops : min= 965, max= 1354, avg=1272.75, stdev=89.23, samples=20 00:23:10.510 lat (msec) : 20=0.16%, 50=77.34%, 100=22.50% 00:23:10.510 cpu : usr=0.73%, sys=5.75%, ctx=2388, majf=0, minf=4097 00:23:10.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:10.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.510 issued rwts: total=12790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.510 job4: (groupid=0, jobs=1): err= 0: pid=2878884: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=999, BW=250MiB/s (262MB/s)(2512MiB/10050msec) 00:23:10.510 slat (usec): min=15, max=25850, avg=991.45, stdev=2629.08 00:23:10.510 clat (msec): min=13, max=114, avg=62.96, stdev= 7.01 00:23:10.510 lat (msec): min=13, max=115, avg=63.95, stdev= 7.45 00:23:10.510 clat percentiles (msec): 00:23:10.510 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 63], 00:23:10.510 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 64], 60.00th=[ 65], 00:23:10.510 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 68], 95.00th=[ 70], 00:23:10.510 | 99.00th=[ 80], 99.50th=[ 85], 99.90th=[ 104], 99.95th=[ 109], 00:23:10.510 | 99.99th=[ 109] 00:23:10.510 bw ( KiB/s): min=237568, max=328192, per=6.32%, avg=255564.80, stdev=21705.34, samples=20 00:23:10.510 iops : min= 928, max= 1282, avg=998.30, stdev=84.79, samples=20 00:23:10.510 lat (msec) : 20=0.39%, 50=8.56%, 100=90.92%, 250=0.13% 00:23:10.510 cpu : usr=0.48%, sys=4.73%, ctx=1885, majf=0, minf=4097 00:23:10.510 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:10.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.510 issued rwts: total=10046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.510 job5: (groupid=0, jobs=1): err= 0: pid=2878886: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=1000, BW=250MiB/s (262MB/s)(2515MiB/10051msec) 00:23:10.510 slat (usec): min=16, max=29128, avg=989.36, stdev=2736.06 00:23:10.510 clat (msec): min=13, max=114, avg=62.87, stdev= 6.84 00:23:10.510 lat (msec): min=13, max=114, avg=63.86, stdev= 7.35 00:23:10.510 clat percentiles (msec): 00:23:10.510 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 63], 00:23:10.510 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 64], 60.00th=[ 65], 00:23:10.510 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 68], 95.00th=[ 70], 00:23:10.510 | 99.00th=[ 77], 99.50th=[ 85], 99.90th=[ 109], 99.95th=[ 112], 00:23:10.510 | 99.99th=[ 115] 00:23:10.510 bw ( KiB/s): min=243712, max=331414, per=6.33%, avg=255981.90, stdev=21931.92, samples=20 00:23:10.510 iops : min= 952, max= 1294, avg=999.90, stdev=85.57, samples=20 00:23:10.510 lat (msec) : 20=0.34%, 50=8.66%, 100=90.87%, 250=0.14% 00:23:10.510 cpu : usr=0.48%, sys=4.84%, ctx=1922, majf=0, minf=4097 00:23:10.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:10.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.510 issued rwts: total=10061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.510 job6: (groupid=0, jobs=1): err= 0: pid=2878887: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=4073, BW=1018MiB/s (1068MB/s)(9.96GiB/10014msec) 00:23:10.510 slat (usec): min=11, max=4096, avg=243.75, stdev=523.32 00:23:10.510 clat (usec): min=1836, max=29297, avg=15453.63, stdev=1029.95 00:23:10.510 lat (usec): min=2006, max=29332, avg=15697.38, stdev=1039.34 00:23:10.510 clat percentiles (usec): 00:23:10.510 | 1.00th=[13829], 5.00th=[14222], 10.00th=[14484], 20.00th=[14746], 00:23:10.510 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:23:10.510 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16450], 95.00th=[16712], 00:23:10.510 | 99.00th=[17171], 99.50th=[19006], 99.90th=[26608], 99.95th=[28181], 00:23:10.510 | 99.99th=[28967] 00:23:10.510 bw ( KiB/s): min=999424, max=1052672, per=25.80%, avg=1042585.60, stdev=12404.33, samples=20 00:23:10.510 iops : min= 3904, max= 4112, avg=4072.60, stdev=48.45, samples=20 00:23:10.510 lat (msec) : 2=0.01%, 4=0.03%, 10=0.13%, 20=99.41%, 50=0.43% 00:23:10.510 cpu : usr=0.69%, sys=8.85%, ctx=8039, majf=0, minf=4097 00:23:10.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:10.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.510 issued rwts: total=40789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.510 job7: (groupid=0, jobs=1): err= 0: pid=2878888: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=1311, BW=328MiB/s (344MB/s)(3292MiB/10038msec) 00:23:10.510 slat (usec): min=16, max=19638, avg=756.42, 
stdev=1911.80 00:23:10.510 clat (usec): min=12740, max=84088, avg=47986.52, stdev=2906.58 00:23:10.510 lat (usec): min=13001, max=84116, avg=48742.94, stdev=3326.09 00:23:10.510 clat percentiles (usec): 00:23:10.510 | 1.00th=[44303], 5.00th=[45876], 10.00th=[45876], 20.00th=[46400], 00:23:10.510 | 30.00th=[46924], 40.00th=[47449], 50.00th=[47449], 60.00th=[47973], 00:23:10.510 | 70.00th=[48497], 80.00th=[49021], 90.00th=[50594], 95.00th=[52167], 00:23:10.510 | 99.00th=[56886], 99.50th=[58983], 99.90th=[64226], 99.95th=[64750], 00:23:10.510 | 99.99th=[68682] 00:23:10.510 bw ( KiB/s): min=326656, max=344576, per=8.30%, avg=335469.85, stdev=5818.67, samples=20 00:23:10.510 iops : min= 1276, max= 1346, avg=1310.40, stdev=22.75, samples=20 00:23:10.510 lat (msec) : 20=0.24%, 50=86.51%, 100=13.25% 00:23:10.510 cpu : usr=0.59%, sys=6.08%, ctx=2477, majf=0, minf=4097 00:23:10.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:10.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.510 issued rwts: total=13167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.510 job8: (groupid=0, jobs=1): err= 0: pid=2878889: Sun Dec 15 16:10:37 2024 00:23:10.510 read: IOPS=982, BW=246MiB/s (258MB/s)(2469MiB/10050msec) 00:23:10.510 slat (usec): min=15, max=27510, avg=994.98, stdev=3017.75 00:23:10.510 clat (msec): min=13, max=114, avg=64.08, stdev= 6.32 00:23:10.510 lat (msec): min=13, max=114, avg=65.07, stdev= 6.99 00:23:10.510 clat percentiles (msec): 00:23:10.510 | 1.00th=[ 26], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 63], 00:23:10.511 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 65], 00:23:10.511 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 68], 95.00th=[ 70], 00:23:10.511 | 99.00th=[ 83], 99.50th=[ 88], 99.90th=[ 110], 99.95th=[ 114], 00:23:10.511 | 99.99th=[ 114] 00:23:10.511 bw ( KiB/s): min=241152, max=285696, per=6.21%, avg=251161.60, stdev=9224.19, samples=20 00:23:10.511 iops : min= 942, max= 1116, avg=981.10, stdev=36.03, samples=20 00:23:10.511 lat (msec) : 20=0.52%, 50=1.06%, 100=98.27%, 250=0.15% 00:23:10.511 cpu : usr=0.37%, sys=4.87%, ctx=2013, majf=0, minf=4097 00:23:10.511 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:10.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.511 issued rwts: total=9874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.511 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.511 job9: (groupid=0, jobs=1): err= 0: pid=2878890: Sun Dec 15 16:10:37 2024 00:23:10.511 read: IOPS=1274, BW=319MiB/s (334MB/s)(3198MiB/10040msec) 00:23:10.511 slat (usec): min=13, max=19087, avg=777.61, stdev=1961.44 00:23:10.511 clat (usec): min=13790, max=92503, avg=49406.02, stdev=4999.77 00:23:10.511 lat (usec): min=14049, max=92558, avg=50183.63, stdev=5317.59 00:23:10.511 clat percentiles (usec): 00:23:10.511 | 1.00th=[45351], 5.00th=[45876], 10.00th=[46400], 20.00th=[46924], 00:23:10.511 | 30.00th=[46924], 40.00th=[47449], 50.00th=[47973], 60.00th=[48497], 00:23:10.511 | 70.00th=[49021], 80.00th=[50070], 90.00th=[53740], 95.00th=[62653], 00:23:10.511 | 99.00th=[66847], 99.50th=[68682], 99.90th=[84411], 99.95th=[88605], 00:23:10.511 | 99.99th=[92799] 00:23:10.511 bw ( KiB/s): min=249344, max=344576, 
per=8.06%, avg=325836.80, stdev=22467.53, samples=20 00:23:10.511 iops : min= 974, max= 1346, avg=1272.80, stdev=87.76, samples=20 00:23:10.511 lat (msec) : 20=0.22%, 50=78.38%, 100=21.41% 00:23:10.511 cpu : usr=0.45%, sys=5.96%, ctx=2383, majf=0, minf=4097 00:23:10.511 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:10.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.511 issued rwts: total=12791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.511 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.511 job10: (groupid=0, jobs=1): err= 0: pid=2878891: Sun Dec 15 16:10:37 2024 00:23:10.511 read: IOPS=998, BW=250MiB/s (262MB/s)(2510MiB/10050msec) 00:23:10.511 slat (usec): min=13, max=16587, avg=992.41, stdev=2409.86 00:23:10.511 clat (msec): min=13, max=103, avg=63.00, stdev= 6.81 00:23:10.511 lat (msec): min=13, max=117, avg=63.99, stdev= 7.18 00:23:10.511 clat percentiles (msec): 00:23:10.511 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 63], 00:23:10.511 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 64], 60.00th=[ 65], 00:23:10.511 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 68], 95.00th=[ 70], 00:23:10.511 | 99.00th=[ 77], 99.50th=[ 80], 99.90th=[ 102], 99.95th=[ 104], 00:23:10.511 | 99.99th=[ 104] 00:23:10.511 bw ( KiB/s): min=237056, max=327310, per=6.32%, avg=255392.70, stdev=21894.73, samples=20 00:23:10.511 iops : min= 926, max= 1278, avg=997.60, stdev=85.43, samples=20 00:23:10.511 lat (msec) : 20=0.34%, 50=8.72%, 100=90.81%, 250=0.14% 00:23:10.511 cpu : usr=0.53%, sys=4.89%, ctx=1927, majf=0, minf=4097 00:23:10.511 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:10.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.511 issued rwts: total=10039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.511 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.511 00:23:10.511 Run status group 0 (all jobs): 00:23:10.511 READ: bw=3947MiB/s (4138MB/s), 246MiB/s-1018MiB/s (258MB/s-1068MB/s), io=38.7GiB (41.6GB), run=10014-10051msec 00:23:10.511 00:23:10.511 Disk stats (read/write): 00:23:10.511 nvme0n1: ios=25990/0, merge=0/0, ticks=1223197/0, in_queue=1223197, util=96.89% 00:23:10.511 nvme10n1: ios=25196/0, merge=0/0, ticks=1220175/0, in_queue=1220175, util=97.12% 00:23:10.511 nvme1n1: ios=25937/0, merge=0/0, ticks=1221611/0, in_queue=1221611, util=97.46% 00:23:10.511 nvme2n1: ios=25203/0, merge=0/0, ticks=1223087/0, in_queue=1223087, util=97.61% 00:23:10.511 nvme3n1: ios=19812/0, merge=0/0, ticks=1223756/0, in_queue=1223756, util=97.73% 00:23:10.511 nvme4n1: ios=19808/0, merge=0/0, ticks=1223341/0, in_queue=1223341, util=98.12% 00:23:10.511 nvme5n1: ios=80656/0, merge=0/0, ticks=1214043/0, in_queue=1214043, util=98.29% 00:23:10.511 nvme6n1: ios=25936/0, merge=0/0, ticks=1222357/0, in_queue=1222357, util=98.42% 00:23:10.511 nvme7n1: ios=19425/0, merge=0/0, ticks=1223946/0, in_queue=1223946, util=98.88% 00:23:10.511 nvme8n1: ios=25214/0, merge=0/0, ticks=1222773/0, in_queue=1222773, util=99.09% 00:23:10.511 nvme9n1: ios=19790/0, merge=0/0, ticks=1223382/0, in_queue=1223382, util=99.24% 00:23:10.511 16:10:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
262144 -d 64 -t randwrite -r 10 00:23:10.511 [global] 00:23:10.511 thread=1 00:23:10.511 invalidate=1 00:23:10.511 rw=randwrite 00:23:10.511 time_based=1 00:23:10.511 runtime=10 00:23:10.511 ioengine=libaio 00:23:10.511 direct=1 00:23:10.511 bs=262144 00:23:10.511 iodepth=64 00:23:10.511 norandommap=1 00:23:10.511 numjobs=1 00:23:10.511 00:23:10.511 [job0] 00:23:10.511 filename=/dev/nvme0n1 00:23:10.511 [job1] 00:23:10.511 filename=/dev/nvme10n1 00:23:10.511 [job2] 00:23:10.511 filename=/dev/nvme1n1 00:23:10.511 [job3] 00:23:10.511 filename=/dev/nvme2n1 00:23:10.511 [job4] 00:23:10.511 filename=/dev/nvme3n1 00:23:10.511 [job5] 00:23:10.511 filename=/dev/nvme4n1 00:23:10.511 [job6] 00:23:10.511 filename=/dev/nvme5n1 00:23:10.511 [job7] 00:23:10.511 filename=/dev/nvme6n1 00:23:10.511 [job8] 00:23:10.511 filename=/dev/nvme7n1 00:23:10.511 [job9] 00:23:10.511 filename=/dev/nvme8n1 00:23:10.511 [job10] 00:23:10.511 filename=/dev/nvme9n1 00:23:10.511 Could not set queue depth (nvme0n1) 00:23:10.511 Could not set queue depth (nvme10n1) 00:23:10.511 Could not set queue depth (nvme1n1) 00:23:10.511 Could not set queue depth (nvme2n1) 00:23:10.511 Could not set queue depth (nvme3n1) 00:23:10.511 Could not set queue depth (nvme4n1) 00:23:10.511 Could not set queue depth (nvme5n1) 00:23:10.511 Could not set queue depth (nvme6n1) 00:23:10.511 Could not set queue depth (nvme7n1) 00:23:10.511 Could not set queue depth (nvme8n1) 00:23:10.511 Could not set queue depth (nvme9n1) 00:23:10.511 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.511 fio-3.35 00:23:10.511 Starting 11 threads 00:23:20.481 00:23:20.481 job0: (groupid=0, jobs=1): err= 0: pid=2880616: Sun Dec 15 16:10:48 2024 00:23:20.481 write: IOPS=837, BW=209MiB/s (220MB/s)(2106MiB/10056msec); 0 zone resets 00:23:20.481 slat (usec): min=26, max=42637, avg=1176.84, stdev=2572.44 00:23:20.481 clat (msec): min=2, max=136, avg=75.22, stdev=10.53 00:23:20.481 lat (msec): min=2, max=147, avg=76.40, stdev=10.81 00:23:20.481 clat percentiles (msec): 00:23:20.481 | 1.00th=[ 52], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 70], 00:23:20.481 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 74], 00:23:20.481 | 70.00th=[ 75], 
80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 96], 00:23:20.481 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 124], 99.95th=[ 127], 00:23:20.481 | 99.99th=[ 138] 00:23:20.481 bw ( KiB/s): min=167936, max=227840, per=6.07%, avg=213990.40, stdev=18100.77, samples=20 00:23:20.481 iops : min= 656, max= 890, avg=835.90, stdev=70.71, samples=20 00:23:20.481 lat (msec) : 4=0.02%, 10=0.13%, 20=0.17%, 50=0.50%, 100=95.04% 00:23:20.481 lat (msec) : 250=4.14% 00:23:20.481 cpu : usr=1.88%, sys=3.73%, ctx=2069, majf=0, minf=1 00:23:20.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:20.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.481 issued rwts: total=0,8422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.481 job1: (groupid=0, jobs=1): err= 0: pid=2880628: Sun Dec 15 16:10:48 2024 00:23:20.481 write: IOPS=2470, BW=618MiB/s (648MB/s)(6183MiB/10012msec); 0 zone resets 00:23:20.481 slat (usec): min=15, max=10903, avg=401.27, stdev=830.63 00:23:20.481 clat (usec): min=7457, max=63431, avg=25499.83, stdev=11355.39 00:23:20.481 lat (usec): min=7507, max=63551, avg=25901.10, stdev=11524.38 00:23:20.481 clat percentiles (usec): 00:23:20.481 | 1.00th=[15926], 5.00th=[16712], 10.00th=[16909], 20.00th=[17433], 00:23:20.481 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18744], 00:23:20.481 | 70.00th=[35390], 80.00th=[36963], 90.00th=[38536], 95.00th=[52167], 00:23:20.481 | 99.00th=[57410], 99.50th=[58459], 99.90th=[60556], 99.95th=[61604], 00:23:20.481 | 99.99th=[63177] 00:23:20.481 bw ( KiB/s): min=288256, max=921088, per=17.91%, avg=631475.20, stdev=259815.46, samples=20 00:23:20.481 iops : min= 1126, max= 3598, avg=2466.70, stdev=1014.90, samples=20 00:23:20.481 lat (msec) : 10=0.06%, 20=64.25%, 50=30.51%, 100=5.19% 00:23:20.481 cpu : usr=4.01%, sys=5.60%, ctx=5141, majf=0, minf=1 00:23:20.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:20.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.481 issued rwts: total=0,24730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.481 job2: (groupid=0, jobs=1): err= 0: pid=2880629: Sun Dec 15 16:10:48 2024 00:23:20.481 write: IOPS=831, BW=208MiB/s (218MB/s)(2090MiB/10055msec); 0 zone resets 00:23:20.481 slat (usec): min=29, max=18147, avg=1190.90, stdev=2404.37 00:23:20.481 clat (msec): min=19, max=125, avg=75.75, stdev= 9.43 00:23:20.481 lat (msec): min=19, max=125, avg=76.94, stdev= 9.67 00:23:20.481 clat percentiles (msec): 00:23:20.481 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 69], 20.00th=[ 70], 00:23:20.481 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 74], 00:23:20.481 | 70.00th=[ 75], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 96], 00:23:20.481 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 120], 99.95th=[ 123], 00:23:20.481 | 99.99th=[ 126] 00:23:20.481 bw ( KiB/s): min=155959, max=228352, per=6.02%, avg=212444.35, stdev=21989.80, samples=20 00:23:20.481 iops : min= 609, max= 892, avg=829.85, stdev=85.93, samples=20 00:23:20.481 lat (msec) : 20=0.05%, 50=0.24%, 100=95.55%, 250=4.16% 00:23:20.481 cpu : usr=2.03%, sys=3.58%, ctx=2033, majf=0, minf=1 00:23:20.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.4%, >=64=99.2% 00:23:20.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.482 issued rwts: total=0,8361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.482 job3: (groupid=0, jobs=1): err= 0: pid=2880630: Sun Dec 15 16:10:48 2024 00:23:20.482 write: IOPS=1233, BW=308MiB/s (323MB/s)(3101MiB/10056msec); 0 zone resets 00:23:20.482 slat (usec): min=21, max=14929, avg=789.32, stdev=1567.52 00:23:20.482 clat (msec): min=11, max=124, avg=51.09, stdev=13.55 00:23:20.482 lat (msec): min=11, max=133, avg=51.88, stdev=13.78 00:23:20.482 clat percentiles (msec): 00:23:20.482 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 38], 00:23:20.482 | 30.00th=[ 41], 40.00th=[ 53], 50.00th=[ 54], 60.00th=[ 55], 00:23:20.482 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 59], 95.00th=[ 71], 00:23:20.482 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 120], 00:23:20.482 | 99.99th=[ 120] 00:23:20.482 bw ( KiB/s): min=157184, max=439808, per=8.96%, avg=315878.40, stdev=69620.01, samples=20 00:23:20.482 iops : min= 614, max= 1718, avg=1233.90, stdev=271.95, samples=20 00:23:20.482 lat (msec) : 20=0.44%, 50=31.20%, 100=65.96%, 250=2.40% 00:23:20.482 cpu : usr=2.76%, sys=4.69%, ctx=3071, majf=0, minf=1 00:23:20.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:20.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.482 issued rwts: total=0,12402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.482 job4: (groupid=0, jobs=1): err= 0: pid=2880631: Sun Dec 15 16:10:48 2024 00:23:20.482 write: IOPS=1432, BW=358MiB/s (376MB/s)(3592MiB/10028msec); 0 zone resets 00:23:20.482 slat (usec): min=21, max=10730, avg=687.49, stdev=1244.45 00:23:20.482 clat (usec): min=15219, max=64419, avg=43967.92, stdev=9132.11 00:23:20.482 lat (usec): min=15275, max=64491, avg=44655.41, stdev=9246.62 00:23:20.482 clat percentiles (usec): 00:23:20.482 | 1.00th=[33424], 5.00th=[34866], 10.00th=[35914], 20.00th=[36439], 00:23:20.482 | 30.00th=[36963], 40.00th=[37487], 50.00th=[38536], 60.00th=[39584], 00:23:20.482 | 70.00th=[53740], 80.00th=[55313], 90.00th=[56886], 95.00th=[57934], 00:23:20.482 | 99.00th=[59507], 99.50th=[60031], 99.90th=[61604], 99.95th=[62129], 00:23:20.482 | 99.99th=[64226] 00:23:20.482 bw ( KiB/s): min=286720, max=444928, per=10.39%, avg=366220.70, stdev=70540.74, samples=20 00:23:20.482 iops : min= 1120, max= 1738, avg=1430.50, stdev=275.54, samples=20 00:23:20.482 lat (msec) : 20=0.06%, 50=63.60%, 100=36.34% 00:23:20.482 cpu : usr=3.03%, sys=5.30%, ctx=3460, majf=0, minf=1 00:23:20.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:20.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.482 issued rwts: total=0,14367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.482 job5: (groupid=0, jobs=1): err= 0: pid=2880632: Sun Dec 15 16:10:48 2024 00:23:20.482 write: IOPS=1359, BW=340MiB/s (356MB/s)(3409MiB/10028msec); 0 zone resets 00:23:20.482 slat (usec): min=21, max=7067, avg=729.10, stdev=1302.87 
00:23:20.482 clat (usec): min=4635, max=62900, avg=46319.37, stdev=9368.03 00:23:20.482 lat (usec): min=4688, max=64106, avg=47048.47, stdev=9485.31 00:23:20.482 clat percentiles (usec): 00:23:20.482 | 1.00th=[33817], 5.00th=[35390], 10.00th=[35914], 20.00th=[36963], 00:23:20.482 | 30.00th=[38011], 40.00th=[38536], 50.00th=[42730], 60.00th=[53740], 00:23:20.482 | 70.00th=[55313], 80.00th=[56361], 90.00th=[57410], 95.00th=[57934], 00:23:20.482 | 99.00th=[59507], 99.50th=[60556], 99.90th=[61604], 99.95th=[61604], 00:23:20.482 | 99.99th=[62129] 00:23:20.482 bw ( KiB/s): min=286208, max=434176, per=9.85%, avg=347494.40, stdev=67276.08, samples=20 00:23:20.482 iops : min= 1118, max= 1696, avg=1357.40, stdev=262.80, samples=20 00:23:20.482 lat (msec) : 10=0.08%, 20=0.10%, 50=51.02%, 100=48.80% 00:23:20.482 cpu : usr=3.14%, sys=4.90%, ctx=3368, majf=0, minf=1 00:23:20.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:20.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.482 issued rwts: total=0,13637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.482 job6: (groupid=0, jobs=1): err= 0: pid=2880633: Sun Dec 15 16:10:48 2024 00:23:20.482 write: IOPS=1264, BW=316MiB/s (331MB/s)(3178MiB/10052msec); 0 zone resets 00:23:20.482 slat (usec): min=22, max=23556, avg=772.13, stdev=1480.44 00:23:20.482 clat (msec): min=7, max=122, avg=49.82, stdev=12.79 00:23:20.482 lat (msec): min=7, max=123, avg=50.59, stdev=12.95 00:23:20.482 clat percentiles (msec): 00:23:20.482 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 38], 00:23:20.482 | 30.00th=[ 39], 40.00th=[ 43], 50.00th=[ 54], 60.00th=[ 55], 00:23:20.482 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 59], 95.00th=[ 72], 00:23:20.482 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 113], 99.95th=[ 121], 00:23:20.482 | 99.99th=[ 124] 00:23:20.482 bw ( KiB/s): min=182784, max=434176, per=9.18%, avg=323831.05, stdev=74679.60, samples=20 00:23:20.482 iops : min= 714, max= 1696, avg=1264.95, stdev=291.70, samples=20 00:23:20.482 lat (msec) : 10=0.18%, 20=0.18%, 50=40.74%, 100=58.63%, 250=0.26% 00:23:20.482 cpu : usr=2.67%, sys=5.17%, ctx=3188, majf=0, minf=1 00:23:20.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:20.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.482 issued rwts: total=0,12711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.482 job7: (groupid=0, jobs=1): err= 0: pid=2880634: Sun Dec 15 16:10:48 2024 00:23:20.482 write: IOPS=829, BW=207MiB/s (218MB/s)(2086MiB/10054msec); 0 zone resets 00:23:20.482 slat (usec): min=32, max=22742, avg=1193.59, stdev=2406.82 00:23:20.482 clat (msec): min=26, max=124, avg=75.91, stdev= 9.35 00:23:20.482 lat (msec): min=26, max=130, avg=77.10, stdev= 9.59 00:23:20.482 clat percentiles (msec): 00:23:20.482 | 1.00th=[ 68], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 70], 00:23:20.482 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 74], 00:23:20.482 | 70.00th=[ 75], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 97], 00:23:20.482 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 122], 99.95th=[ 122], 00:23:20.482 | 99.99th=[ 126] 00:23:20.482 bw ( KiB/s): min=156985, max=227328, per=6.01%, 
avg=211983.65, stdev=21934.79, samples=20 00:23:20.482 iops : min= 613, max= 888, avg=828.05, stdev=85.71, samples=20 00:23:20.482 lat (msec) : 50=0.24%, 100=95.53%, 250=4.23% 00:23:20.482 cpu : usr=1.92%, sys=3.76%, ctx=2042, majf=0, minf=1 00:23:20.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:20.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.482 issued rwts: total=0,8343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.482 job8: (groupid=0, jobs=1): err= 0: pid=2880635: Sun Dec 15 16:10:48 2024 00:23:20.482 write: IOPS=1074, BW=269MiB/s (282MB/s)(2701MiB/10056msec); 0 zone resets 00:23:20.482 slat (usec): min=24, max=19089, avg=909.51, stdev=1848.16 00:23:20.482 clat (msec): min=4, max=123, avg=58.65, stdev=12.56 00:23:20.482 lat (msec): min=4, max=124, avg=59.56, stdev=12.79 00:23:20.482 clat percentiles (msec): 00:23:20.482 | 1.00th=[ 47], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:23:20.482 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 55], 60.00th=[ 56], 00:23:20.482 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 72], 95.00th=[ 91], 00:23:20.482 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 122], 00:23:20.482 | 99.99th=[ 125] 00:23:20.482 bw ( KiB/s): min=160768, max=301056, per=7.80%, avg=274918.40, stdev=44829.28, samples=20 00:23:20.482 iops : min= 628, max= 1176, avg=1073.90, stdev=175.11, samples=20 00:23:20.482 lat (msec) : 10=0.02%, 20=0.07%, 50=1.46%, 100=95.28%, 250=3.17% 00:23:20.482 cpu : usr=2.56%, sys=4.35%, ctx=2664, majf=0, minf=1 00:23:20.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:20.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.482 issued rwts: total=0,10802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.482 job9: (groupid=0, jobs=1): err= 0: pid=2880636: Sun Dec 15 16:10:48 2024 00:23:20.482 write: IOPS=1627, BW=407MiB/s (427MB/s)(4092MiB/10056msec); 0 zone resets 00:23:20.482 slat (usec): min=16, max=10713, avg=607.82, stdev=1284.14 00:23:20.482 clat (msec): min=4, max=122, avg=38.70, stdev=16.71 00:23:20.482 lat (msec): min=4, max=122, avg=39.31, stdev=16.98 00:23:20.482 clat percentiles (msec): 00:23:20.482 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:23:20.482 | 30.00th=[ 20], 40.00th=[ 36], 50.00th=[ 39], 60.00th=[ 52], 00:23:20.482 | 70.00th=[ 54], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 59], 00:23:20.482 | 99.00th=[ 73], 99.50th=[ 75], 99.90th=[ 105], 99.95th=[ 112], 00:23:20.482 | 99.99th=[ 123] 00:23:20.482 bw ( KiB/s): min=227840, max=876544, per=11.84%, avg=417443.00, stdev=206873.09, samples=20 00:23:20.482 iops : min= 890, max= 3424, avg=1630.60, stdev=808.01, samples=20 00:23:20.482 lat (msec) : 10=0.11%, 20=31.96%, 50=25.42%, 100=42.40%, 250=0.11% 00:23:20.482 cpu : usr=3.39%, sys=4.79%, ctx=3694, majf=0, minf=1 00:23:20.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:20.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.482 issued rwts: total=0,16366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.482 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:23:20.482 job10: (groupid=0, jobs=1): err= 0: pid=2880637: Sun Dec 15 16:10:48 2024 00:23:20.482 write: IOPS=833, BW=208MiB/s (218MB/s)(2094MiB/10056msec); 0 zone resets 00:23:20.482 slat (usec): min=28, max=17668, avg=1188.75, stdev=2377.92 00:23:20.482 clat (msec): min=12, max=125, avg=75.61, stdev= 9.58 00:23:20.482 lat (msec): min=13, max=125, avg=76.80, stdev= 9.83 00:23:20.482 clat percentiles (msec): 00:23:20.482 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 70], 00:23:20.482 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:23:20.482 | 70.00th=[ 75], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 96], 00:23:20.482 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 118], 99.95th=[ 120], 00:23:20.482 | 99.99th=[ 126] 00:23:20.483 bw ( KiB/s): min=159232, max=227328, per=6.04%, avg=212838.40, stdev=21776.94, samples=20 00:23:20.483 iops : min= 622, max= 888, avg=831.40, stdev=85.07, samples=20 00:23:20.483 lat (msec) : 20=0.10%, 50=0.29%, 100=95.28%, 250=4.33% 00:23:20.483 cpu : usr=2.07%, sys=3.58%, ctx=2017, majf=0, minf=1 00:23:20.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:20.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.483 issued rwts: total=0,8377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.483 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.483 00:23:20.483 Run status group 0 (all jobs): 00:23:20.483 WRITE: bw=3444MiB/s (3611MB/s), 207MiB/s-618MiB/s (218MB/s-648MB/s), io=33.8GiB (36.3GB), run=10012-10056msec 00:23:20.483 00:23:20.483 Disk stats (read/write): 00:23:20.483 nvme0n1: ios=49/16522, merge=0/0, ticks=21/1214858, in_queue=1214879, util=96.79% 00:23:20.483 nvme10n1: ios=0/48479, merge=0/0, ticks=0/1226395, in_queue=1226395, util=96.94% 00:23:20.483 nvme1n1: ios=0/16375, merge=0/0, ticks=0/1213072, in_queue=1213072, util=97.25% 00:23:20.483 nvme2n1: ios=0/24468, merge=0/0, ticks=0/1217631, in_queue=1217631, util=97.44% 00:23:20.483 nvme3n1: ios=0/28192, merge=0/0, ticks=0/1219312, in_queue=1219312, util=97.53% 00:23:20.483 nvme4n1: ios=0/26736, merge=0/0, ticks=0/1218685, in_queue=1218685, util=97.93% 00:23:20.483 nvme5n1: ios=0/25102, merge=0/0, ticks=0/1217370, in_queue=1217370, util=98.09% 00:23:20.483 nvme6n1: ios=0/16356, merge=0/0, ticks=0/1213639, in_queue=1213639, util=98.22% 00:23:20.483 nvme7n1: ios=0/21257, merge=0/0, ticks=0/1215890, in_queue=1215890, util=98.67% 00:23:20.483 nvme8n1: ios=0/32388, merge=0/0, ticks=0/1218206, in_queue=1218206, util=98.89% 00:23:20.483 nvme9n1: ios=0/16418, merge=0/0, ticks=0/1216018, in_queue=1216018, util=99.03% 00:23:20.483 16:10:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:23:20.483 16:10:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:23:20.483 16:10:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.483 16:10:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:20.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:20.741 16:10:49 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.741 16:10:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:22.114 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:22.114 16:10:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:23.047 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect 
SPDK3 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.047 16:10:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:23.980 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.980 16:10:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:24.914 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK5 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:24.914 16:10:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:25.847 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.847 16:10:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:26.779 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:26.779 16:10:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:27.712 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:27.712 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:27.712 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:27.712 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:27.712 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:23:27.712 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:27.712 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:23:27.970 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:27.970 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:27.970 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.970 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.970 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.970 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.970 16:10:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:28.904 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:28.904 16:10:57 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:28.904 16:10:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:29.836 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:29.836 16:10:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:30.766 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 
controller(s) 00:23:30.766 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:30.766 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:30.766 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:30.766 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:23:30.766 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:30.766 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:23:30.766 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:30.767 rmmod nvme_rdma 00:23:30.767 rmmod nvme_fabrics 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 2872635 ']' 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 2872635 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 2872635 ']' 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 2872635 00:23:30.767 16:10:59 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.767 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2872635 00:23:31.025 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:31.025 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:31.025 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2872635' 00:23:31.025 killing process with pid 2872635 00:23:31.025 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 2872635 00:23:31.025 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 2872635 00:23:31.283 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:31.283 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:23:31.542 00:23:31.542 real 1m14.425s 00:23:31.542 user 4m52.333s 00:23:31.542 sys 0m19.762s 00:23:31.542 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.542 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:31.542 ************************************ 00:23:31.542 END TEST nvmf_multiconnection 00:23:31.542 ************************************ 00:23:31.542 16:10:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:31.542 16:10:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:31.542 16:10:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.542 16:10:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:31.542 ************************************ 00:23:31.542 START TEST nvmf_initiator_timeout 00:23:31.542 ************************************ 00:23:31.542 16:10:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:31.542 * Looking for test storage... 
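The nvmf_multiconnection teardown traced above (multiconnection.sh @37-@40) amounts to the loop below. This is a sketch reconstructed from the xtrace output, not the script verbatim; NVMF_SUBSYS is 11 in this run, and waitforserial_disconnect/rpc_cmd are the helpers the trace itself calls.

for i in $(seq 1 "$NVMF_SUBSYS"); do
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"              # drop the host-side controller
  waitforserial_disconnect "SPDK$i"                             # poll lsblk -o NAME,SERIAL until serial SPDK$i is gone
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the subsystem on the target
done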
00:23:31.542 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:31.542 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:31.542 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:23:31.542 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:31.802 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:31.802 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.802 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.802 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:31.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.803 --rc genhtml_branch_coverage=1 00:23:31.803 --rc genhtml_function_coverage=1 00:23:31.803 --rc genhtml_legend=1 00:23:31.803 --rc geninfo_all_blocks=1 00:23:31.803 --rc geninfo_unexecuted_blocks=1 00:23:31.803 00:23:31.803 ' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:31.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.803 --rc genhtml_branch_coverage=1 00:23:31.803 --rc genhtml_function_coverage=1 00:23:31.803 --rc genhtml_legend=1 00:23:31.803 --rc geninfo_all_blocks=1 00:23:31.803 --rc geninfo_unexecuted_blocks=1 00:23:31.803 00:23:31.803 ' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:31.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.803 --rc genhtml_branch_coverage=1 00:23:31.803 --rc genhtml_function_coverage=1 00:23:31.803 --rc genhtml_legend=1 00:23:31.803 --rc geninfo_all_blocks=1 00:23:31.803 --rc geninfo_unexecuted_blocks=1 00:23:31.803 00:23:31.803 ' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:31.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.803 --rc genhtml_branch_coverage=1 00:23:31.803 --rc genhtml_function_coverage=1 00:23:31.803 --rc genhtml_legend=1 00:23:31.803 --rc geninfo_all_blocks=1 00:23:31.803 --rc geninfo_unexecuted_blocks=1 00:23:31.803 00:23:31.803 ' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.803 16:11:00 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.803 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:31.803 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:31.804 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.804 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.804 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.804 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:31.804 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:31.804 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.804 16:11:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.426 16:11:06 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:38.426 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:38.426 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:23:38.427 16:11:06 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:38.427 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:38.427 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:38.427 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # rdma_device_init 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@526 -- # allocate_nic_ips 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
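The per-interface address lookups traced below (nvmf/common.sh @116-@117) follow this pipeline; a sketch assuming the helper's body matches the traced commands:

get_ip_address() {
  local interface=$1
  # field 4 of `ip -o -4 addr show` is ADDR/PREFIX; strip the prefix length
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

In this run it resolves mlx_0_0 to 192.168.100.8 and mlx_0_1 to 192.168.100.9.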
00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:38.427 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:38.427 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:38.427 altname enp217s0f0np0 00:23:38.427 altname ens818f0np0 00:23:38.427 inet 192.168.100.8/24 scope global mlx_0_0 00:23:38.427 valid_lft forever preferred_lft forever 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:38.427 16:11:06 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:38.427 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:38.427 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:38.427 altname enp217s0f1np1 00:23:38.427 altname ens818f1np1 00:23:38.427 inet 192.168.100.9/24 scope global mlx_0_1 00:23:38.427 valid_lft forever preferred_lft forever 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:38.427 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # 
for nic_name in $(get_rdma_if_list) 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:23:38.428 192.168.100.9' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # head -n 1 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:23:38.428 192.168.100.9' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:23:38.428 192.168.100.9' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # tail -n +2 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # head -n 1 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # 
set +x 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=2887361 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 2887361 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 2887361 ']' 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.428 16:11:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:38.428 [2024-12-15 16:11:06.804479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:38.428 [2024-12-15 16:11:06.804528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.428 [2024-12-15 16:11:06.873550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.428 [2024-12-15 16:11:06.913217] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.428 [2024-12-15 16:11:06.913255] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.428 [2024-12-15 16:11:06.913264] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.428 [2024-12-15 16:11:06.913273] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.428 [2024-12-15 16:11:06.913280] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
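The trace above is the harness deriving its target addresses: get_ip_address runs 'ip -o -4 addr show' on each RDMA interface and strips the CIDR suffix. A minimal standalone sketch of that helper, assuming the renamed Mellanox interfaces mlx_0_0/mlx_0_1 present on this rig:

  # Sketch of the get_ip_address helper seen in the xtrace above.
  get_ip_address() {
      local interface=$1
      # -o prints one record per line; field 4 looks like 192.168.100.8/24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this rig
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this rig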
00:23:38.428 [2024-12-15 16:11:06.913329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.428 [2024-12-15 16:11:06.913421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.428 [2024-12-15 16:11:06.913508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.428 [2024-12-15 16:11:06.913510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.688 Malloc0 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.688 Delay0 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.688 [2024-12-15 16:11:07.125202] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1395870/0x1312940) succeed. 00:23:38.688 [2024-12-15 16:11:07.135935] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1395ea0/0x1353fe0) succeed. 
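Before any I/O, the initiator_timeout test builds its backing stack over the app's RPC socket: a 64 MiB malloc bdev, a delay bdev layered on top with 30 us on all four latency knobs, then the RDMA transport. A hedged equivalent of those rpc_cmd calls using SPDK's rpc.py (script path assumed from this workspace; arguments taken verbatim from the trace):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512 B blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 30 -t 30 -w 30 -n 30                       # avg/p99 read and write latency, microseconds
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192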
00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.688 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.947 [2024-12-15 16:11:07.279462] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.947 16:11:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:39.880 16:11:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:39.880 16:11:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:23:39.880 16:11:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:39.880 16:11:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:39.880 16:11:08 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2887930 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:41.779 16:11:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:41.779 [global] 00:23:41.779 thread=1 00:23:41.779 invalidate=1 00:23:41.779 rw=write 00:23:41.779 time_based=1 00:23:41.779 runtime=60 00:23:41.779 ioengine=libaio 00:23:41.779 direct=1 00:23:41.779 bs=4096 00:23:41.779 iodepth=1 00:23:41.779 norandommap=0 00:23:41.779 numjobs=1 00:23:41.779 00:23:41.779 verify_dump=1 00:23:41.779 verify_backlog=512 00:23:41.779 verify_state_save=0 00:23:41.779 do_verify=1 00:23:41.779 verify=crc32c-intel 00:23:41.779 [job0] 00:23:41.779 filename=/dev/nvme0n1 00:23:42.047 Could not set queue depth (nvme0n1) 00:23:42.304 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:42.304 fio-3.35 00:23:42.304 Starting 1 thread 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:44.833 true 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:44.833 true 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:44.833 true 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:44.833 true 00:23:44.833 16:11:13 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.833 16:11:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:48.116 true 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:48.116 true 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:48.116 true 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:48.116 true 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:48.116 16:11:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2887930 00:24:44.327 00:24:44.327 job0: (groupid=0, jobs=1): err= 0: pid=2888070: Sun Dec 15 16:12:10 2024 00:24:44.327 read: IOPS=1262, BW=5052KiB/s (5173kB/s)(296MiB/60000msec) 00:24:44.327 slat (usec): min=5, max=7864, avg= 9.19, stdev=39.28 00:24:44.327 clat (usec): min=37, max=42559k, avg=664.61, stdev=154605.20 00:24:44.327 lat (usec): min=90, max=42559k, avg=673.79, stdev=154605.20 00:24:44.327 clat percentiles (usec): 00:24:44.327 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 00:24:44.327 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:24:44.327 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 115], 00:24:44.327 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 141], 00:24:44.327 | 99.99th=[ 260] 00:24:44.327 write: IOPS=1271, BW=5085KiB/s (5208kB/s)(298MiB/60000msec); 0 zone resets 00:24:44.327 slat (usec): 
min=6, max=1030, avg=11.68, stdev= 4.46 00:24:44.327 clat (usec): min=72, max=313, avg=100.65, stdev= 6.70 00:24:44.327 lat (usec): min=88, max=1139, avg=112.33, stdev= 8.34 00:24:44.327 clat percentiles (usec): 00:24:44.327 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:24:44.327 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 102], 00:24:44.327 | 70.00th=[ 104], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 112], 00:24:44.327 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 129], 99.95th=[ 141], 00:24:44.327 | 99.99th=[ 273] 00:24:44.327 bw ( KiB/s): min= 4096, max=20480, per=100.00%, avg=16969.14, stdev=2601.81, samples=35 00:24:44.327 iops : min= 1024, max= 5120, avg=4242.29, stdev=650.45, samples=35 00:24:44.327 lat (usec) : 50=0.01%, 100=40.31%, 250=59.68%, 500=0.01% 00:24:44.327 lat (msec) : >=2000=0.01% 00:24:44.327 cpu : usr=1.91%, sys=3.31%, ctx=152066, majf=0, minf=141 00:24:44.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:44.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.327 issued rwts: total=75776,76282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:44.327 00:24:44.327 Run status group 0 (all jobs): 00:24:44.327 READ: bw=5052KiB/s (5173kB/s), 5052KiB/s-5052KiB/s (5173kB/s-5173kB/s), io=296MiB (310MB), run=60000-60000msec 00:24:44.327 WRITE: bw=5085KiB/s (5208kB/s), 5085KiB/s-5085KiB/s (5208kB/s-5208kB/s), io=298MiB (312MB), run=60000-60000msec 00:24:44.327 00:24:44.327 Disk stats (read/write): 00:24:44.327 nvme0n1: ios=75705/75776, merge=0/0, ticks=7262/6945, in_queue=14207, util=99.54% 00:24:44.327 16:12:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:44.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:44.327 nvmf hotplug test: fio successful as expected 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.327 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:44.328 rmmod nvme_rdma 00:24:44.328 rmmod nvme_fabrics 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 2887361 ']' 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 2887361 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 2887361 ']' 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 2887361 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2887361 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2887361' 00:24:44.328 killing process with pid 2887361 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 2887361 00:24:44.328 16:12:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 2887361 00:24:44.328 16:12:12 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:24:44.328 00:24:44.328 real 1m12.252s 00:24:44.328 user 4m31.585s 00:24:44.328 sys 0m7.736s 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:44.328 ************************************ 00:24:44.328 END TEST nvmf_initiator_timeout 00:24:44.328 ************************************ 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:44.328 ************************************ 00:24:44.328 START TEST nvmf_srq_overwhelm 00:24:44.328 ************************************ 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:24:44.328 * Looking for test storage... 
00:24:44.328 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # lcov --version 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:44.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.328 --rc genhtml_branch_coverage=1 00:24:44.328 --rc genhtml_function_coverage=1 00:24:44.328 --rc genhtml_legend=1 00:24:44.328 --rc geninfo_all_blocks=1 00:24:44.328 --rc geninfo_unexecuted_blocks=1 00:24:44.328 00:24:44.328 ' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:44.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.328 --rc genhtml_branch_coverage=1 00:24:44.328 --rc genhtml_function_coverage=1 00:24:44.328 --rc genhtml_legend=1 00:24:44.328 --rc geninfo_all_blocks=1 00:24:44.328 --rc geninfo_unexecuted_blocks=1 00:24:44.328 00:24:44.328 ' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:44.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.328 --rc genhtml_branch_coverage=1 00:24:44.328 --rc genhtml_function_coverage=1 00:24:44.328 --rc genhtml_legend=1 00:24:44.328 --rc geninfo_all_blocks=1 00:24:44.328 --rc geninfo_unexecuted_blocks=1 00:24:44.328 00:24:44.328 ' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:44.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.328 --rc genhtml_branch_coverage=1 00:24:44.328 --rc genhtml_function_coverage=1 00:24:44.328 --rc genhtml_legend=1 00:24:44.328 --rc geninfo_all_blocks=1 00:24:44.328 --rc geninfo_unexecuted_blocks=1 00:24:44.328 00:24:44.328 ' 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.328 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.329 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.329 16:12:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:50.901 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.901 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.901 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:50.902 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 
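gather_supported_nvmf_pci_devs, traced above, builds per-vendor device-ID lists (e810/x722/mlx) and matches each PCI function against them; both ports of the adapter at 0000:d9:00.0/1 (0x15b3 - 0x1015) land in the mlx list. A rough sysfs-only sketch of the same scan, written for this note rather than taken from common.sh:

  # Walk sysfs and report Mellanox (0x15b3) functions, as matched above.
  mellanox=0x15b3
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
      [[ $vendor == "$mellanox" ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
  done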
00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:50.902 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:50.902 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:50.902 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # is_hw=yes 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:24:50.902 16:12:19 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # rdma_device_init 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@526 -- # allocate_nic_ips 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 
-- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:50.902 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:50.903 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:50.903 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:50.903 altname enp217s0f0np0 00:24:50.903 altname ens818f0np0 00:24:50.903 inet 192.168.100.8/24 scope global mlx_0_0 00:24:50.903 valid_lft forever preferred_lft forever 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:50.903 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:50.903 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:50.903 altname enp217s0f1np1 00:24:50.903 altname ens818f1np1 00:24:50.903 inet 192.168.100.9/24 scope global mlx_0_1 00:24:50.903 valid_lft forever preferred_lft forever 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # return 0 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- 
# get_available_rdma_ips 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 
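rdma_device_init, a few lines up, loads the kernel RDMA stack module by module before any addresses are checked; nvme-rdma itself is only probed later, once the transport choice is confirmed. The same sequence as a short sketch, ending with a sanity check that mirrors the ip addr output in the log:

  # Module set loaded by load_ib_rdma_modules in the trace above.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
  ip addr show mlx_0_0   # expect inet 192.168.100.8/24, as printed above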
00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:24:50.903 192.168.100.9' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:24:50.903 192.168.100.9' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # head -n 1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:24:50.903 192.168.100.9' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # tail -n +2 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # head -n 1 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@505 -- # nvmfpid=2902234 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@506 -- # waitforlisten 2902234 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 2902234 ']' 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
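The address discovery traced above reduces to a small pipeline: enumerate the RDMA-capable netdevs that rxe_cfg reports (mlx_0_0 and mlx_0_1 on this rig) and take the first IPv4 address of each. A minimal stand-alone sketch of that step, with the helper modeled on the get_ip_address calls in the trace and the interface list hard-coded purely for illustration:

    #!/usr/bin/env bash
    # Print the first IPv4 address of each RDMA interface, mirroring the
    # `ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1` pipeline above.
    get_ip_address() {
        local interface=$1
        # `ip -o` emits one line per address; field 4 is "addr/prefixlen".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    for nic in mlx_0_0 mlx_0_1; do
        get_ip_address "$nic"
    done

In this run the two addresses come back as 192.168.100.8 and 192.168.100.9 and are stored as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, which is why every listener and connect below targets 192.168.100.8.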
00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.903 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:50.903 [2024-12-15 16:12:19.319765] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:50.903 [2024-12-15 16:12:19.319851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.903 [2024-12-15 16:12:19.389925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.903 [2024-12-15 16:12:19.429930] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.903 [2024-12-15 16:12:19.429970] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.903 [2024-12-15 16:12:19.429979] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.903 [2024-12-15 16:12:19.429987] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.903 [2024-12-15 16:12:19.429994] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.903 [2024-12-15 16:12:19.430046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.903 [2024-12-15 16:12:19.430140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.903 [2024-12-15 16:12:19.430229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.903 [2024-12-15 16:12:19.430231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:51.163 [2024-12-15 16:12:19.613092] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x129ee40/0x12a3330) succeed. 00:24:51.163 [2024-12-15 16:12:19.623411] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12a0480/0x12e49d0) succeed. 
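Condensing the bring-up the trace just performed: nvmf_tgt is launched with a four-core mask, waitforlisten blocks until the default RPC socket answers, and an RDMA transport is created. A hedged equivalent in plain shell (assuming rpc_cmd forwards to scripts/rpc.py against /var/tmp/spdk.sock, as the socket path in the trace suggests; the socket poll is a simplified stand-in for waitforlisten):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Start the NVMe-oF target: instance 0, tracepoint mask 0xFFFF, cores 0-3.
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    # Simplified stand-in for waitforlisten: wait for the RPC socket to appear.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Create the RDMA transport with the exact options traced above.
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192 -s 1024

The two "Create IB device ... succeed" notices confirm the transport came up on both mlx5 ports.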
00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:51.163 Malloc0 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:51.163 [2024-12-15 16:12:19.722070] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:51.163 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.164 16:12:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:24:52.164 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:24:52.164 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:52.164 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:52.164 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:24:52.164 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# lsblk -l -o NAME 00:24:52.164 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 Malloc1 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.423 16:12:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:53.362 Malloc2 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.362 16:12:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:24:54.300 16:12:22 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.300 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:54.559 Malloc3 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.559 16:12:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:55.497 
16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.497 Malloc4 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.497 16:12:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.435 Malloc5 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.435 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:56.436 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.436 16:12:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.707 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.707 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:24:56.707 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.707 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.707 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.707 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:24:57.643 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:24:57.643 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:57.643 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:57.643 16:12:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:24:57.643 16:12:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:57.643 16:12:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:24:57.643 16:12:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:57.643 16:12:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:24:57.643 
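All six iterations of the seq 0 5 loop above follow one pattern per subsystem: create nqn.2016-06.io.spdk:cnodeN, back it with a 64 MiB malloc bdev, attach the bdev as a namespace, add an RDMA listener on 192.168.100.8:4420, connect from the host with nvme-cli, and poll until the block device surfaces. A condensed sketch of the loop body (rpc.py again assumed as the transport behind rpc_cmd; HOSTID copied from the connect commands in the trace; the lsblk poll is a simplified waitforblk):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    HOSTID=8013ee90-59d8-e711-906e-00163566263e
    for i in $(seq 0 5); do
        nqn=nqn.2016-06.io.spdk:cnode$i
        "$RPC" nvmf_create_subsystem "$nqn" -a -s "SPDK0000000000000$i"
        "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB, 512 B blocks
        "$RPC" nvmf_subsystem_add_ns "$nqn" "Malloc$i"
        "$RPC" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" \
            --hostid="$HOSTID" -t rdma -n "$nqn" -a 192.168.100.8 -s 4420
        # Same idea as waitforblk: spin until /dev/nvme${i}n1 is visible.
        until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 1; done
    done

With all six namespaces connected, the test hands the block devices to fio via scripts/fio-wrapper, whose generated job file follows.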
[global]
00:24:57.643 thread=1
00:24:57.643 invalidate=1
00:24:57.643 rw=read
00:24:57.643 time_based=1
00:24:57.643 runtime=10
00:24:57.643 ioengine=libaio
00:24:57.643 direct=1
00:24:57.643 bs=1048576
00:24:57.643 iodepth=128
00:24:57.643 norandommap=1
00:24:57.643 numjobs=13
00:24:57.643
00:24:57.643 [job0]
00:24:57.643 filename=/dev/nvme0n1
00:24:57.643 [job1]
00:24:57.643 filename=/dev/nvme1n1
00:24:57.643 [job2]
00:24:57.643 filename=/dev/nvme2n1
00:24:57.643 [job3]
00:24:57.643 filename=/dev/nvme3n1
00:24:57.643 [job4]
00:24:57.643 filename=/dev/nvme4n1
00:24:57.643 [job5]
00:24:57.643 filename=/dev/nvme5n1
00:24:57.643 Could not set queue depth (nvme0n1)
00:24:57.643 Could not set queue depth (nvme1n1)
00:24:57.643 Could not set queue depth (nvme2n1)
00:24:57.643 Could not set queue depth (nvme3n1)
00:24:57.643 Could not set queue depth (nvme4n1)
00:24:57.643 Could not set queue depth (nvme5n1)
00:24:57.901 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:57.901 ...
00:24:57.901 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:57.901 ...
00:24:57.901 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:57.901 ...
00:24:57.901 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:57.901 ...
00:24:57.901 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:57.901 ...
00:24:57.901 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:57.901 ...
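The fio-wrapper flags map directly onto the [global] section echoed above: -i 1048576 becomes bs, -d 128 becomes iodepth, -t read becomes rw, -r 10 becomes runtime, and -n 13 becomes numjobs; with numjobs=13 over six [jobN] sections, fio spawns 6 x 13 = 78 workers, matching the "Starting 78 threads" banner below. A roughly equivalent direct invocation, shown only for orientation (the test itself always goes through the wrapper):

    fio --thread --invalidate=1 --rw=read --time_based --runtime=10 \
        --ioengine=libaio --direct=1 --bs=1048576 --iodepth=128 \
        --norandommap --numjobs=13 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme1n1 \
        --name=job2 --filename=/dev/nvme2n1 \
        --name=job3 --filename=/dev/nvme3n1 \
        --name=job4 --filename=/dev/nvme4n1 \
        --name=job5 --filename=/dev/nvme5n1

The "Could not set queue depth" warnings are non-fatal here; every job in the results that follow reports err= 0.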
00:24:57.901 fio-3.35 00:24:57.901 Starting 78 threads 00:25:12.794 00:25:12.794 job0: (groupid=0, jobs=1): err= 0: pid=2903566: Sun Dec 15 16:12:39 2024 00:25:12.794 read: IOPS=45, BW=45.1MiB/s (47.3MB/s)(564MiB/12509msec) 00:25:12.794 slat (usec): min=475, max=2114.3k, avg=18447.75, stdev=151220.50 00:25:12.794 clat (msec): min=595, max=9109, avg=2725.13, stdev=3243.46 00:25:12.794 lat (msec): min=601, max=9112, avg=2743.58, stdev=3251.22 00:25:12.794 clat percentiles (msec): 00:25:12.794 | 1.00th=[ 617], 5.00th=[ 667], 10.00th=[ 776], 20.00th=[ 944], 00:25:12.794 | 30.00th=[ 969], 40.00th=[ 995], 50.00th=[ 1011], 60.00th=[ 1062], 00:25:12.794 | 70.00th=[ 1099], 80.00th=[ 8557], 90.00th=[ 8926], 95.00th=[ 9060], 00:25:12.794 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:25:12.794 | 99.99th=[ 9060] 00:25:12.794 bw ( KiB/s): min= 1957, max=180224, per=2.42%, avg=81352.00, stdev=71063.21, samples=11 00:25:12.794 iops : min= 1, max= 176, avg=79.27, stdev=69.60, samples=11 00:25:12.794 lat (msec) : 750=8.87%, 1000=37.59%, 2000=29.61%, >=2000=23.94% 00:25:12.794 cpu : usr=0.03%, sys=1.28%, ctx=1162, majf=0, minf=32769 00:25:12.794 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.8% 00:25:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.794 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:12.794 issued rwts: total=564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.794 job0: (groupid=0, jobs=1): err= 0: pid=2903567: Sun Dec 15 16:12:39 2024 00:25:12.794 read: IOPS=476, BW=477MiB/s (500MB/s)(4776MiB/10014msec) 00:25:12.794 slat (usec): min=40, max=97606, avg=2088.64, stdev=3593.11 00:25:12.794 clat (msec): min=12, max=4067, avg=254.34, stdev=184.42 00:25:12.794 lat (msec): min=13, max=4117, avg=256.43, stdev=186.00 00:25:12.794 clat percentiles (msec): 00:25:12.794 | 1.00th=[ 57], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 118], 00:25:12.794 | 30.00th=[ 118], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 257], 00:25:12.794 | 70.00th=[ 262], 80.00th=[ 296], 90.00th=[ 414], 95.00th=[ 600], 00:25:12.794 | 99.00th=[ 961], 99.50th=[ 995], 99.90th=[ 1020], 99.95th=[ 1020], 00:25:12.794 | 99.99th=[ 4077] 00:25:12.794 bw ( KiB/s): min=108544, max=1110016, per=14.90%, avg=500854.32, stdev=305426.95, samples=19 00:25:12.794 iops : min= 106, max= 1084, avg=489.05, stdev=298.18, samples=19 00:25:12.794 lat (msec) : 20=0.17%, 50=0.69%, 100=1.09%, 250=37.88%, 500=51.40% 00:25:12.794 lat (msec) : 750=5.53%, 1000=2.97%, 2000=0.23%, >=2000=0.04% 00:25:12.794 cpu : usr=0.24%, sys=5.11%, ctx=4558, majf=0, minf=32769 00:25:12.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.794 issued rwts: total=4776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.794 job0: (groupid=0, jobs=1): err= 0: pid=2903568: Sun Dec 15 16:12:39 2024 00:25:12.794 read: IOPS=1, BW=1912KiB/s (1958kB/s)(23.0MiB/12319msec) 00:25:12.794 slat (msec): min=2, max=2108, avg=444.63, stdev=828.81 00:25:12.794 clat (msec): min=2091, max=12295, avg=8491.58, stdev=2854.45 00:25:12.794 lat (msec): min=4200, max=12318, avg=8936.21, stdev=2597.09 00:25:12.794 clat percentiles (msec): 00:25:12.794 | 1.00th=[ 
2089], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6342], 00:25:12.794 | 30.00th=[ 8423], 40.00th=[ 8423], 50.00th=[ 8490], 60.00th=[ 8557], 00:25:12.794 | 70.00th=[10537], 80.00th=[10671], 90.00th=[12281], 95.00th=[12281], 00:25:12.794 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:25:12.794 | 99.99th=[12281] 00:25:12.794 lat (msec) : >=2000=100.00% 00:25:12.794 cpu : usr=0.00%, sys=0.13%, ctx=72, majf=0, minf=5889 00:25:12.794 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:25:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.794 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:12.794 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.794 job0: (groupid=0, jobs=1): err= 0: pid=2903569: Sun Dec 15 16:12:39 2024 00:25:12.794 read: IOPS=4, BW=4828KiB/s (4944kB/s)(49.0MiB/10393msec) 00:25:12.794 slat (usec): min=977, max=2092.7k, avg=210920.26, stdev=605425.10 00:25:12.794 clat (msec): min=56, max=10382, avg=8018.15, stdev=3110.47 00:25:12.794 lat (msec): min=2121, max=10392, avg=8229.07, stdev=2902.86 00:25:12.794 clat percentiles (msec): 00:25:12.794 | 1.00th=[ 57], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4329], 00:25:12.794 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10268], 60.00th=[10268], 00:25:12.794 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:25:12.794 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:12.794 | 99.99th=[10402] 00:25:12.794 lat (msec) : 100=2.04%, >=2000=97.96% 00:25:12.794 cpu : usr=0.02%, sys=0.62%, ctx=91, majf=0, minf=12545 00:25:12.794 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:25:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.794 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:12.794 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.794 job0: (groupid=0, jobs=1): err= 0: pid=2903570: Sun Dec 15 16:12:39 2024 00:25:12.794 read: IOPS=2, BW=2701KiB/s (2766kB/s)(27.0MiB/10237msec) 00:25:12.794 slat (usec): min=1049, max=2110.5k, avg=377243.23, stdev=783508.20 00:25:12.794 clat (msec): min=50, max=10203, avg=5691.01, stdev=3258.80 00:25:12.794 lat (msec): min=2096, max=10236, avg=6068.25, stdev=3169.05 00:25:12.794 clat percentiles (msec): 00:25:12.794 | 1.00th=[ 51], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2140], 00:25:12.794 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6477], 00:25:12.794 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10134], 95.00th=[10134], 00:25:12.794 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:25:12.794 | 99.99th=[10268] 00:25:12.794 lat (msec) : 100=3.70%, >=2000=96.30% 00:25:12.794 cpu : usr=0.00%, sys=0.25%, ctx=58, majf=0, minf=6913 00:25:12.794 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:25:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.794 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:12.794 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.794 job0: (groupid=0, jobs=1): err= 0: pid=2903571: Sun Dec 15 16:12:39 2024 00:25:12.794 read: 
IOPS=48, BW=48.3MiB/s (50.7MB/s)(603MiB/12476msec) 00:25:12.794 slat (usec): min=456, max=2190.3k, avg=17226.05, stdev=150419.38 00:25:12.794 clat (msec): min=524, max=9092, avg=2508.09, stdev=3238.61 00:25:12.794 lat (msec): min=526, max=9094, avg=2525.31, stdev=3246.91 00:25:12.794 clat percentiles (msec): 00:25:12.794 | 1.00th=[ 535], 5.00th=[ 550], 10.00th=[ 575], 20.00th=[ 592], 00:25:12.794 | 30.00th=[ 634], 40.00th=[ 676], 50.00th=[ 818], 60.00th=[ 1183], 00:25:12.794 | 70.00th=[ 1284], 80.00th=[ 8557], 90.00th=[ 8792], 95.00th=[ 8926], 00:25:12.794 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:25:12.794 | 99.99th=[ 9060] 00:25:12.795 bw ( KiB/s): min= 1391, max=221184, per=3.22%, avg=108201.56, stdev=94496.64, samples=9 00:25:12.795 iops : min= 1, max= 216, avg=105.56, stdev=92.27, samples=9 00:25:12.795 lat (msec) : 750=48.92%, 1000=5.47%, 2000=23.38%, >=2000=22.22% 00:25:12.795 cpu : usr=0.03%, sys=1.15%, ctx=1136, majf=0, minf=32769 00:25:12.795 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.6% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:12.795 issued rwts: total=603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.795 job0: (groupid=0, jobs=1): err= 0: pid=2903572: Sun Dec 15 16:12:39 2024 00:25:12.795 read: IOPS=16, BW=16.1MiB/s (16.9MB/s)(169MiB/10471msec) 00:25:12.795 slat (usec): min=172, max=2112.4k, avg=61543.74, stdev=328652.84 00:25:12.795 clat (msec): min=69, max=10357, avg=5590.81, stdev=3544.87 00:25:12.795 lat (msec): min=1866, max=10359, avg=5652.35, stdev=3538.37 00:25:12.795 clat percentiles (msec): 00:25:12.795 | 1.00th=[ 1871], 5.00th=[ 1871], 10.00th=[ 1921], 20.00th=[ 1989], 00:25:12.795 | 30.00th=[ 2056], 40.00th=[ 2165], 50.00th=[ 6409], 60.00th=[ 8490], 00:25:12.795 | 70.00th=[ 8490], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268], 00:25:12.795 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:12.795 | 99.99th=[10402] 00:25:12.795 bw ( KiB/s): min=22528, max=61440, per=1.25%, avg=41984.00, stdev=27514.94, samples=2 00:25:12.795 iops : min= 22, max= 60, avg=41.00, stdev=26.87, samples=2 00:25:12.795 lat (msec) : 100=0.59%, 2000=26.04%, >=2000=73.37% 00:25:12.795 cpu : usr=0.00%, sys=1.41%, ctx=160, majf=0, minf=32332 00:25:12.795 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.7%, 16=9.5%, 32=18.9%, >=64=62.7% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=97.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.3% 00:25:12.795 issued rwts: total=169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.795 job0: (groupid=0, jobs=1): err= 0: pid=2903573: Sun Dec 15 16:12:39 2024 00:25:12.795 read: IOPS=20, BW=20.5MiB/s (21.4MB/s)(256MiB/12515msec) 00:25:12.795 slat (usec): min=71, max=2108.3k, avg=40710.81, stdev=254218.05 00:25:12.795 clat (msec): min=563, max=10398, avg=5814.43, stdev=4152.61 00:25:12.795 lat (msec): min=572, max=10401, avg=5855.14, stdev=4147.18 00:25:12.795 clat percentiles (msec): 00:25:12.795 | 1.00th=[ 567], 5.00th=[ 575], 10.00th=[ 584], 20.00th=[ 2056], 00:25:12.795 | 30.00th=[ 2140], 40.00th=[ 2198], 50.00th=[ 4212], 60.00th=[ 9866], 00:25:12.795 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10268], 95.00th=[10402], 00:25:12.795 | 
99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:12.795 | 99.99th=[10402] 00:25:12.795 bw ( KiB/s): min= 2048, max=172032, per=1.12%, avg=37740.43, stdev=64730.29, samples=7 00:25:12.795 iops : min= 2, max= 168, avg=36.71, stdev=63.30, samples=7 00:25:12.795 lat (msec) : 750=14.06%, 2000=1.17%, >=2000=84.77% 00:25:12.795 cpu : usr=0.00%, sys=1.16%, ctx=384, majf=0, minf=32769 00:25:12.795 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.5%, >=64=75.4% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:25:12.795 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.795 job0: (groupid=0, jobs=1): err= 0: pid=2903574: Sun Dec 15 16:12:39 2024 00:25:12.795 read: IOPS=2, BW=2888KiB/s (2957kB/s)(35.0MiB/12412msec) 00:25:12.795 slat (usec): min=878, max=2104.4k, avg=294838.14, stdev=704909.86 00:25:12.795 clat (msec): min=2091, max=12391, avg=9848.36, stdev=3176.08 00:25:12.795 lat (msec): min=4195, max=12411, avg=10143.20, stdev=2901.98 00:25:12.795 clat percentiles (msec): 00:25:12.795 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:25:12.795 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[12281], 60.00th=[12281], 00:25:12.795 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12416], 00:25:12.795 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:25:12.795 | 99.99th=[12416] 00:25:12.795 lat (msec) : >=2000=100.00% 00:25:12.795 cpu : usr=0.00%, sys=0.30%, ctx=75, majf=0, minf=8961 00:25:12.795 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:12.795 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.795 job0: (groupid=0, jobs=1): err= 0: pid=2903575: Sun Dec 15 16:12:39 2024 00:25:12.795 read: IOPS=2, BW=2065KiB/s (2115kB/s)(25.0MiB/12395msec) 00:25:12.795 slat (msec): min=6, max=2097, avg=412.16, stdev=808.45 00:25:12.795 clat (msec): min=2090, max=12388, avg=8942.26, stdev=3403.11 00:25:12.795 lat (msec): min=4188, max=12394, avg=9354.42, stdev=3153.57 00:25:12.795 clat percentiles (msec): 00:25:12.795 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:25:12.795 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:25:12.795 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:25:12.795 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:25:12.795 | 99.99th=[12416] 00:25:12.795 lat (msec) : >=2000=100.00% 00:25:12.795 cpu : usr=0.00%, sys=0.21%, ctx=71, majf=0, minf=6401 00:25:12.795 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:12.795 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.795 job0: (groupid=0, jobs=1): err= 0: pid=2903576: Sun Dec 15 16:12:39 2024 00:25:12.795 read: IOPS=6, BW=6378KiB/s (6531kB/s)(65.0MiB/10436msec) 00:25:12.795 
slat (usec): min=836, max=2117.1k, avg=159863.57, stdev=534171.50 00:25:12.795 clat (msec): min=43, max=10431, avg=8937.21, stdev=2675.47 00:25:12.795 lat (msec): min=2120, max=10435, avg=9097.07, stdev=2435.45 00:25:12.795 clat percentiles (msec): 00:25:12.795 | 1.00th=[ 44], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 6477], 00:25:12.795 | 30.00th=[10134], 40.00th=[10268], 50.00th=[10268], 60.00th=[10402], 00:25:12.795 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:25:12.795 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:12.795 | 99.99th=[10402] 00:25:12.795 lat (msec) : 50=1.54%, >=2000=98.46% 00:25:12.795 cpu : usr=0.00%, sys=0.80%, ctx=108, majf=0, minf=16641 00:25:12.795 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:12.795 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.795 job0: (groupid=0, jobs=1): err= 0: pid=2903577: Sun Dec 15 16:12:39 2024 00:25:12.795 read: IOPS=2, BW=2978KiB/s (3050kB/s)(30.0MiB/10314msec) 00:25:12.795 slat (usec): min=1056, max=2108.2k, avg=342140.88, stdev=749435.61 00:25:12.795 clat (msec): min=49, max=10296, avg=6048.37, stdev=3307.33 00:25:12.795 lat (msec): min=2114, max=10313, avg=6390.51, stdev=3194.29 00:25:12.795 clat percentiles (msec): 00:25:12.795 | 1.00th=[ 50], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2140], 00:25:12.795 | 30.00th=[ 2198], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 6477], 00:25:12.795 | 70.00th=[ 8557], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:25:12.795 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:25:12.795 | 99.99th=[10268] 00:25:12.795 lat (msec) : 50=3.33%, >=2000=96.67% 00:25:12.795 cpu : usr=0.01%, sys=0.30%, ctx=72, majf=0, minf=7681 00:25:12.795 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:12.795 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.795 job0: (groupid=0, jobs=1): err= 0: pid=2903578: Sun Dec 15 16:12:39 2024 00:25:12.795 read: IOPS=138, BW=139MiB/s (145MB/s)(1421MiB/10256msec) 00:25:12.795 slat (usec): min=44, max=2046.1k, avg=7152.99, stdev=56265.42 00:25:12.795 clat (msec): min=85, max=3077, avg=885.89, stdev=573.64 00:25:12.795 lat (msec): min=515, max=3078, avg=893.04, stdev=575.15 00:25:12.795 clat percentiles (msec): 00:25:12.795 | 1.00th=[ 518], 5.00th=[ 518], 10.00th=[ 523], 20.00th=[ 531], 00:25:12.795 | 30.00th=[ 558], 40.00th=[ 600], 50.00th=[ 651], 60.00th=[ 902], 00:25:12.795 | 70.00th=[ 919], 80.00th=[ 978], 90.00th=[ 1028], 95.00th=[ 2534], 00:25:12.795 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3071], 99.95th=[ 3071], 00:25:12.795 | 99.99th=[ 3071] 00:25:12.795 bw ( KiB/s): min=26624, max=249856, per=4.92%, avg=165461.56, stdev=59948.54, samples=16 00:25:12.795 iops : min= 26, max= 244, avg=161.50, stdev=58.54, samples=16 00:25:12.795 lat (msec) : 100=0.07%, 750=53.48%, 1000=33.43%, 2000=4.08%, >=2000=8.94% 00:25:12.795 cpu : usr=0.04%, sys=2.06%, ctx=1294, majf=0, minf=32769 00:25:12.795 IO depths : 1=0.1%, 
2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.795 issued rwts: total=1421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.795 job1: (groupid=0, jobs=1): err= 0: pid=2903579: Sun Dec 15 16:12:39 2024 00:25:12.795 read: IOPS=35, BW=35.1MiB/s (36.8MB/s)(436MiB/12418msec) 00:25:12.795 slat (usec): min=43, max=2142.0k, avg=23698.84, stdev=191254.35 00:25:12.795 clat (msec): min=424, max=10800, avg=3504.53, stdev=4319.56 00:25:12.795 lat (msec): min=428, max=10801, avg=3528.23, stdev=4330.82 00:25:12.795 clat percentiles (msec): 00:25:12.795 | 1.00th=[ 439], 5.00th=[ 485], 10.00th=[ 558], 20.00th=[ 617], 00:25:12.795 | 30.00th=[ 625], 40.00th=[ 625], 50.00th=[ 651], 60.00th=[ 659], 00:25:12.795 | 70.00th=[ 4665], 80.00th=[10537], 90.00th=[10537], 95.00th=[10671], 00:25:12.795 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:25:12.795 | 99.99th=[10805] 00:25:12.795 bw ( KiB/s): min= 1587, max=272384, per=2.69%, avg=90338.71, stdev=110187.74, samples=7 00:25:12.795 iops : min= 1, max= 266, avg=88.14, stdev=107.68, samples=7 00:25:12.795 lat (msec) : 500=5.50%, 750=61.01%, >=2000=33.49% 00:25:12.795 cpu : usr=0.02%, sys=1.31%, ctx=430, majf=0, minf=32769 00:25:12.795 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.3%, >=64=85.6% 00:25:12.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.795 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:12.796 issued rwts: total=436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903580: Sun Dec 15 16:12:39 2024 00:25:12.796 read: IOPS=2, BW=2233KiB/s (2287kB/s)(27.0MiB/12382msec) 00:25:12.796 slat (usec): min=851, max=2156.1k, avg=381896.72, stdev=804476.53 00:25:12.796 clat (msec): min=2070, max=12380, avg=9073.83, stdev=2985.33 00:25:12.796 lat (msec): min=4226, max=12381, avg=9455.73, stdev=2700.91 00:25:12.796 clat percentiles (msec): 00:25:12.796 | 1.00th=[ 2072], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6342], 00:25:12.796 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[10671], 00:25:12.796 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:25:12.796 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:25:12.796 | 99.99th=[12416] 00:25:12.796 lat (msec) : >=2000=100.00% 00:25:12.796 cpu : usr=0.02%, sys=0.19%, ctx=49, majf=0, minf=6913 00:25:12.796 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:25:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.796 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:12.796 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903581: Sun Dec 15 16:12:39 2024 00:25:12.796 read: IOPS=99, BW=99.6MiB/s (104MB/s)(1021MiB/10249msec) 00:25:12.796 slat (usec): min=438, max=2191.4k, avg=9972.05, stdev=105585.00 00:25:12.796 clat (msec): min=64, max=4828, avg=1056.79, stdev=1342.63 00:25:12.796 lat (msec): min=481, max=4831, avg=1066.77, stdev=1346.47 00:25:12.796 clat percentiles (msec): 
00:25:12.796 | 1.00th=[ 481], 5.00th=[ 485], 10.00th=[ 485], 20.00th=[ 489], 00:25:12.796 | 30.00th=[ 489], 40.00th=[ 493], 50.00th=[ 518], 60.00th=[ 527], 00:25:12.796 | 70.00th=[ 600], 80.00th=[ 667], 90.00th=[ 4463], 95.00th=[ 4665], 00:25:12.796 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:12.796 | 99.99th=[ 4799] 00:25:12.796 bw ( KiB/s): min= 2048, max=266240, per=6.04%, avg=203148.00, stdev=80816.98, samples=9 00:25:12.796 iops : min= 2, max= 260, avg=198.33, stdev=78.88, samples=9 00:25:12.796 lat (msec) : 100=0.10%, 500=43.49%, 750=42.70%, >=2000=13.71% 00:25:12.796 cpu : usr=0.04%, sys=1.17%, ctx=2151, majf=0, minf=32769 00:25:12.796 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:25:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.796 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.796 issued rwts: total=1021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903582: Sun Dec 15 16:12:39 2024 00:25:12.796 read: IOPS=152, BW=152MiB/s (160MB/s)(1570MiB/10313msec) 00:25:12.796 slat (usec): min=40, max=2103.1k, avg=6524.11, stdev=73995.10 00:25:12.796 clat (msec): min=64, max=4792, avg=813.59, stdev=1094.50 00:25:12.796 lat (msec): min=362, max=4798, avg=820.11, stdev=1097.98 00:25:12.796 clat percentiles (msec): 00:25:12.796 | 1.00th=[ 363], 5.00th=[ 363], 10.00th=[ 368], 20.00th=[ 384], 00:25:12.796 | 30.00th=[ 460], 40.00th=[ 489], 50.00th=[ 493], 60.00th=[ 502], 00:25:12.796 | 70.00th=[ 531], 80.00th=[ 600], 90.00th=[ 709], 95.00th=[ 4463], 00:25:12.796 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:12.796 | 99.99th=[ 4799] 00:25:12.796 bw ( KiB/s): min= 2048, max=360448, per=6.27%, avg=210880.29, stdev=115376.10, samples=14 00:25:12.796 iops : min= 2, max= 352, avg=205.86, stdev=112.66, samples=14 00:25:12.796 lat (msec) : 100=0.06%, 500=59.75%, 750=31.53%, >=2000=8.66% 00:25:12.796 cpu : usr=0.03%, sys=2.01%, ctx=2600, majf=0, minf=32770 00:25:12.796 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:25:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.796 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.796 issued rwts: total=1570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903583: Sun Dec 15 16:12:39 2024 00:25:12.796 read: IOPS=14, BW=14.3MiB/s (15.0MB/s)(178MiB/12453msec) 00:25:12.796 slat (usec): min=1090, max=2127.3k, avg=58258.43, stdev=299342.72 00:25:12.796 clat (msec): min=1602, max=10644, avg=6308.23, stdev=2103.61 00:25:12.796 lat (msec): min=1620, max=12363, avg=6366.49, stdev=2123.78 00:25:12.796 clat percentiles (msec): 00:25:12.796 | 1.00th=[ 1620], 5.00th=[ 1737], 10.00th=[ 1905], 20.00th=[ 5134], 00:25:12.796 | 30.00th=[ 5470], 40.00th=[ 6678], 50.00th=[ 7080], 60.00th=[ 7483], 00:25:12.796 | 70.00th=[ 7752], 80.00th=[ 8020], 90.00th=[ 8087], 95.00th=[ 8221], 00:25:12.796 | 99.00th=[ 8557], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:12.796 | 99.99th=[10671] 00:25:12.796 bw ( KiB/s): min= 1436, max=40878, per=0.62%, avg=20734.40, stdev=18792.36, samples=5 00:25:12.796 iops : min= 1, max= 39, avg=19.80, stdev=17.96, samples=5 00:25:12.796 lat (msec) : 2000=12.92%, 
>=2000=87.08%
00:25:12.796 cpu : usr=0.01%, sys=0.85%, ctx=708, majf=0, minf=32769
00:25:12.796 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=9.0%, 32=18.0%, >=64=64.6%
00:25:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.796 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9%
00:25:12.796 issued rwts: total=178,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903584: Sun Dec 15 16:12:39 2024
00:25:12.796 read: IOPS=4, BW=4423KiB/s (4529kB/s)(54.0MiB/12501msec)
00:25:12.796 slat (usec): min=1240, max=2100.3k, avg=192611.37, stdev=584583.36
00:25:12.796 clat (msec): min=2099, max=12498, avg=11025.83, stdev=2708.91
00:25:12.796 lat (msec): min=4174, max=12500, avg=11218.44, stdev=2416.19
00:25:12.796 clat percentiles (msec):
00:25:12.796 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8490],
00:25:12.796 | 30.00th=[12416], 40.00th=[12416], 50.00th=[12416], 60.00th=[12416],
00:25:12.796 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12550], 95.00th=[12550],
00:25:12.796 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550],
00:25:12.796 | 99.99th=[12550]
00:25:12.796 lat (msec) : >=2000=100.00%
00:25:12.796 cpu : usr=0.00%, sys=0.58%, ctx=84, majf=0, minf=13825
00:25:12.796 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0%
00:25:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.796 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.796 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903585: Sun Dec 15 16:12:39 2024
00:25:12.796 read: IOPS=2, BW=2311KiB/s (2367kB/s)(28.0MiB/12405msec)
00:25:12.796 slat (usec): min=1145, max=2117.8k, avg=368305.99, stdev=773403.61
00:25:12.796 clat (msec): min=2091, max=12402, avg=8813.47, stdev=3676.24
00:25:12.796 lat (msec): min=4182, max=12404, avg=9181.78, stdev=3489.76
00:25:12.796 clat percentiles (msec):
00:25:12.796 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4212],
00:25:12.796 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12147],
00:25:12.796 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:25:12.796 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416],
00:25:12.796 | 99.99th=[12416]
00:25:12.796 lat (msec) : >=2000=100.00%
00:25:12.796 cpu : usr=0.01%, sys=0.26%, ctx=71, majf=0, minf=7169
00:25:12.796 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0%
00:25:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.796 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:25:12.796 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903586: Sun Dec 15 16:12:39 2024
00:25:12.796 read: IOPS=4, BW=4199KiB/s (4300kB/s)(51.0MiB/12436msec)
00:25:12.796 slat (usec): min=1911, max=2097.4k, avg=202838.46, stdev=593280.43
00:25:12.796 clat (msec): min=2090, max=12428, avg=9494.86, stdev=3320.43
00:25:12.796 lat (msec): min=4169, max=12435, avg=9697.70, stdev=3171.70
00:25:12.796 clat percentiles (msec):
00:25:12.796 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:25:12.796 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12147],
00:25:12.796 | 70.00th=[12281], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416],
00:25:12.796 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416],
00:25:12.796 | 99.99th=[12416]
00:25:12.796 lat (msec) : >=2000=100.00%
00:25:12.796 cpu : usr=0.01%, sys=0.50%, ctx=87, majf=0, minf=13057
00:25:12.796 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0%
00:25:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.796 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.796 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903587: Sun Dec 15 16:12:39 2024
00:25:12.796 read: IOPS=2, BW=2312KiB/s (2368kB/s)(28.0MiB/12400msec)
00:25:12.796 slat (usec): min=633, max=2092.3k, avg=368283.16, stdev=763268.32
00:25:12.796 clat (msec): min=2087, max=12248, avg=8849.46, stdev=2919.74
00:25:12.796 lat (msec): min=4179, max=12399, avg=9217.75, stdev=2675.36
00:25:12.796 clat percentiles (msec):
00:25:12.796 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:25:12.796 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10537], 60.00th=[10671],
00:25:12.796 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12281], 95.00th=[12281],
00:25:12.796 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:25:12.796 | 99.99th=[12281]
00:25:12.796 lat (msec) : >=2000=100.00%
00:25:12.796 cpu : usr=0.00%, sys=0.15%, ctx=70, majf=0, minf=7169
00:25:12.796 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0%
00:25:12.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.796 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:25:12.796 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.796 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.796 job1: (groupid=0, jobs=1): err= 0: pid=2903588: Sun Dec 15 16:12:39 2024
00:25:12.796 read: IOPS=60, BW=60.1MiB/s (63.0MB/s)(616MiB/10255msec)
00:25:12.796 slat (usec): min=52, max=2032.9k, avg=16534.31, stdev=130375.36
00:25:12.796 clat (msec): min=63, max=4544, avg=1462.68, stdev=942.44
00:25:12.796 lat (msec): min=901, max=4548, avg=1479.21, stdev=948.97
00:25:12.797 clat percentiles (msec):
00:25:12.797 | 1.00th=[ 902], 5.00th=[ 911], 10.00th=[ 911], 20.00th=[ 919],
00:25:12.797 | 30.00th=[ 919], 40.00th=[ 927], 50.00th=[ 927], 60.00th=[ 961],
00:25:12.797 | 70.00th=[ 978], 80.00th=[ 2534], 90.00th=[ 2970], 95.00th=[ 3071],
00:25:12.797 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530],
00:25:12.797 | 99.99th=[ 4530]
00:25:12.797 bw ( KiB/s): min= 6144, max=149205, per=3.30%, avg=110982.00, stdev=59610.88, samples=9
00:25:12.797 iops : min= 6, max= 145, avg=108.22, stdev=58.11, samples=9
00:25:12.797 lat (msec) : 100=0.16%, 1000=71.59%, 2000=1.46%, >=2000=26.79%
00:25:12.797 cpu : usr=0.06%, sys=1.90%, ctx=530, majf=0, minf=32769
00:25:12.797 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8%
00:25:12.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.797 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:12.797 issued rwts: total=616,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.797 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.797 job1: (groupid=0, jobs=1): err= 0: pid=2903589: Sun Dec 15 16:12:39 2024
00:25:12.797 read: IOPS=5, BW=5338KiB/s (5466kB/s)(54.0MiB/10359msec)
00:25:12.797 slat (usec): min=902, max=2105.3k, avg=190781.55, stdev=580414.30
00:25:12.797 clat (msec): min=56, max=10352, avg=7680.28, stdev=2803.79
00:25:12.797 lat (msec): min=2130, max=10358, avg=7871.06, stdev=2619.70
00:25:12.797 clat percentiles (msec):
00:25:12.797 | 1.00th=[ 57], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 4329],
00:25:12.797 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[ 8658],
00:25:12.797 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402],
00:25:12.797 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:25:12.797 | 99.99th=[10402]
00:25:12.797 lat (msec) : 100=1.85%, >=2000=98.15%
00:25:12.797 cpu : usr=0.00%, sys=0.55%, ctx=81, majf=0, minf=13825
00:25:12.797 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0%
00:25:12.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.797 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.797 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.797 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.797 job1: (groupid=0, jobs=1): err= 0: pid=2903590: Sun Dec 15 16:12:39 2024
00:25:12.797 read: IOPS=63, BW=63.9MiB/s (67.0MB/s)(643MiB/10069msec)
00:25:12.797 slat (usec): min=63, max=2116.8k, avg=15564.95, stdev=107677.84
00:25:12.797 clat (msec): min=56, max=6129, avg=1108.10, stdev=876.66
00:25:12.797 lat (msec): min=86, max=7840, avg=1123.67, stdev=911.94
00:25:12.797 clat percentiles (msec):
00:25:12.797 | 1.00th=[ 106], 5.00th=[ 243], 10.00th=[ 405], 20.00th=[ 651],
00:25:12.797 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 684], 60.00th=[ 802],
00:25:12.797 | 70.00th=[ 1368], 80.00th=[ 1938], 90.00th=[ 2165], 95.00th=[ 2198],
00:25:12.797 | 99.00th=[ 5738], 99.50th=[ 5805], 99.90th=[ 6141], 99.95th=[ 6141],
00:25:12.797 | 99.99th=[ 6141]
00:25:12.797 bw ( KiB/s): min=34816, max=198656, per=3.49%, avg=117418.67, stdev=69201.49, samples=9
00:25:12.797 iops : min= 34, max= 194, avg=114.67, stdev=67.58, samples=9
00:25:12.797 lat (msec) : 100=0.93%, 250=4.35%, 500=7.78%, 750=42.61%, 1000=9.18%
00:25:12.797 lat (msec) : 2000=16.49%, >=2000=18.66%
00:25:12.797 cpu : usr=0.03%, sys=1.80%, ctx=889, majf=0, minf=32769
00:25:12.797 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:25:12.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.797 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:12.797 issued rwts: total=643,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.797 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.797 job1: (groupid=0, jobs=1): err= 0: pid=2903591: Sun Dec 15 16:12:39 2024
00:25:12.797 read: IOPS=20, BW=20.8MiB/s (21.8MB/s)(259MiB/12431msec)
00:25:12.797 slat (usec): min=602, max=2124.9k, avg=39904.88, stdev=205291.97
00:25:12.797 clat (msec): min=1915, max=6445, avg=4319.83, stdev=1630.14
00:25:12.797 lat (msec): min=1942, max=6511, avg=4359.74, stdev=1621.32
00:25:12.797 clat percentiles (msec):
00:25:12.797 | 1.00th=[ 1938], 5.00th=[ 2072], 10.00th=[ 2232], 20.00th=[ 2400],
00:25:12.797 | 30.00th=[ 2467], 40.00th=[ 4597], 50.00th=[ 5000], 60.00th=[ 5336],
00:25:12.797 | 70.00th=[ 5671], 80.00th=[ 5940], 90.00th=[ 6141], 95.00th=[ 6208],
00:25:12.797 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 6477], 99.95th=[ 6477],
00:25:12.797 | 99.99th=[ 6477]
00:25:12.797 bw ( KiB/s): min= 1481, max=86016, per=1.15%, avg=38533.71, stdev=34220.19, samples=7
00:25:12.797 iops : min= 1, max= 84, avg=37.43, stdev=33.60, samples=7
00:25:12.797 lat (msec) : 2000=2.70%, >=2000=97.30%
00:25:12.797 cpu : usr=0.03%, sys=1.15%, ctx=575, majf=0, minf=32769
00:25:12.797 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.2%, 32=12.4%, >=64=75.7%
00:25:12.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.797 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:25:12.797 issued rwts: total=259,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.797 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.797 job2: (groupid=0, jobs=1): err= 0: pid=2903592: Sun Dec 15 16:12:39 2024
00:25:12.797 read: IOPS=110, BW=110MiB/s (116MB/s)(1112MiB/10075msec)
00:25:12.797 slat (usec): min=44, max=128630, avg=8992.30, stdev=14296.35
00:25:12.797 clat (msec): min=67, max=5796, avg=1072.47, stdev=498.51
00:25:12.797 lat (msec): min=110, max=5819, avg=1081.47, stdev=501.13
00:25:12.797 clat percentiles (msec):
00:25:12.797 | 1.00th=[ 207], 5.00th=[ 430], 10.00th=[ 439], 20.00th=[ 575],
00:25:12.797 | 30.00th=[ 860], 40.00th=[ 911], 50.00th=[ 1045], 60.00th=[ 1234],
00:25:12.797 | 70.00th=[ 1401], 80.00th=[ 1485], 90.00th=[ 1603], 95.00th=[ 1703],
00:25:12.797 | 99.00th=[ 1938], 99.50th=[ 3071], 99.90th=[ 5671], 99.95th=[ 5805],
00:25:12.797 | 99.99th=[ 5805]
00:25:12.797 bw ( KiB/s): min=18432, max=271840, per=3.33%, avg=112017.89, stdev=64674.04, samples=18
00:25:12.797 iops : min= 18, max= 265, avg=109.22, stdev=63.13, samples=18
00:25:12.797 lat (msec) : 100=0.09%, 250=1.44%, 500=13.40%, 750=12.59%, 1000=20.41%
00:25:12.797 lat (msec) : 2000=51.17%, >=2000=0.90%
00:25:12.797 cpu : usr=0.03%, sys=2.23%, ctx=2130, majf=0, minf=32769
00:25:12.797 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3%
00:25:12.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.797 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:12.797 issued rwts: total=1112,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.797 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.797 job2: (groupid=0, jobs=1): err= 0: pid=2903593: Sun Dec 15 16:12:39 2024
00:25:12.797 read: IOPS=37, BW=37.2MiB/s (39.0MB/s)(385MiB/10350msec)
00:25:12.797 slat (usec): min=450, max=2064.3k, avg=26702.88, stdev=192523.34
00:25:12.797 clat (msec): min=66, max=10212, avg=3235.19, stdev=3180.80
00:25:12.797 lat (msec): min=493, max=10217, avg=3261.90, stdev=3183.01
00:25:12.797 clat percentiles (msec):
00:25:12.797 | 1.00th=[ 489], 5.00th=[ 514], 10.00th=[ 558], 20.00th=[ 902],
00:25:12.797 | 30.00th=[ 953], 40.00th=[ 1053], 50.00th=[ 1099], 60.00th=[ 1821],
00:25:12.797 | 70.00th=[ 6007], 80.00th=[ 7886], 90.00th=[ 8020], 95.00th=[ 8087],
00:25:12.797 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:25:12.797 | 99.99th=[10268]
00:25:12.797 bw ( KiB/s): min= 2048, max=253445, per=1.96%, avg=65728.62, stdev=86921.82, samples=8
00:25:12.797 iops : min= 2, max= 247, avg=64.12, stdev=84.73, samples=8
00:25:12.797 lat (msec) : 100=0.26%, 500=2.86%, 750=10.39%, 1000=20.78%, 2000=27.53%
00:25:12.797 lat (msec) : >=2000=38.18%
00:25:12.797 cpu : usr=0.01%, sys=1.03%, ctx=949, majf=0, minf=32769
00:25:12.797 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6%
00:25:12.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.797 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:12.797 issued rwts: total=385,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.797 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.797 job2: (groupid=0, jobs=1): err= 0: pid=2903594: Sun Dec 15 16:12:39 2024
00:25:12.797 read: IOPS=51, BW=51.8MiB/s (54.3MB/s)(532MiB/10280msec)
00:25:12.797 slat (usec): min=537, max=2172.2k, avg=19194.56, stdev=157804.92
00:25:12.797 clat (msec): min=65, max=7224, avg=2289.66, stdev=2559.31
00:25:12.797 lat (msec): min=560, max=7228, avg=2308.86, stdev=2563.43
00:25:12.797 clat percentiles (msec):
00:25:12.797 | 1.00th=[ 558], 5.00th=[ 584], 10.00th=[ 600], 20.00th=[ 634],
00:25:12.797 | 30.00th=[ 718], 40.00th=[ 743], 50.00th=[ 894], 60.00th=[ 1167],
00:25:12.797 | 70.00th=[ 1351], 80.00th=[ 6611], 90.00th=[ 6946], 95.00th=[ 7148],
00:25:12.797 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215],
00:25:12.797 | 99.99th=[ 7215]
00:25:12.797 bw ( KiB/s): min= 2048, max=221184, per=2.74%, avg=91931.44, stdev=87206.29, samples=9
00:25:12.797 iops : min= 2, max= 216, avg=89.67, stdev=85.29, samples=9
00:25:12.797 lat (msec) : 100=0.19%, 750=40.79%, 1000=10.53%, 2000=23.87%, >=2000=24.62%
00:25:12.797 cpu : usr=0.05%, sys=1.18%, ctx=1610, majf=0, minf=32769
00:25:12.797 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2%
00:25:12.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.797 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:12.797 issued rwts: total=532,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.797 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.797 job2: (groupid=0, jobs=1): err= 0: pid=2903595: Sun Dec 15 16:12:39 2024
00:25:12.797 read: IOPS=48, BW=48.1MiB/s (50.4MB/s)(494MiB/10276msec)
00:25:12.797 slat (usec): min=42, max=2167.3k, avg=20663.62, stdev=150270.59
00:25:12.797 clat (msec): min=64, max=6658, avg=2454.64, stdev=2002.50
00:25:12.797 lat (msec): min=723, max=6658, avg=2475.31, stdev=2005.86
00:25:12.797 clat percentiles (msec):
00:25:12.797 | 1.00th=[ 751], 5.00th=[ 860], 10.00th=[ 1062], 20.00th=[ 1133],
00:25:12.797 | 30.00th=[ 1183], 40.00th=[ 1301], 50.00th=[ 1368], 60.00th=[ 1435],
00:25:12.797 | 70.00th=[ 1972], 80.00th=[ 5000], 90.00th=[ 6544], 95.00th=[ 6611],
00:25:12.797 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678],
00:25:12.797 | 99.99th=[ 6678]
00:25:12.797 bw ( KiB/s): min= 4096, max=163840, per=2.23%, avg=74956.80, stdev=54946.82, samples=10
00:25:12.797 iops : min= 4, max= 160, avg=73.20, stdev=53.66, samples=10
00:25:12.797 lat (msec) : 100=0.20%, 750=1.01%, 1000=6.28%, 2000=62.55%, >=2000=29.96%
00:25:12.797 cpu : usr=0.07%, sys=1.17%, ctx=1307, majf=0, minf=32769
00:25:12.797 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2%
00:25:12.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.797 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:25:12.797 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.797 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.797 job2: (groupid=0, jobs=1): err= 0: pid=2903596: Sun Dec 15 16:12:39 2024
00:25:12.797 read: IOPS=86, BW=86.8MiB/s (91.0MB/s)(904MiB/10418msec)
00:25:12.797 slat (usec): min=43, max=2055.2k, avg=11442.56, stdev=96734.37
00:25:12.798 clat (msec): min=67, max=6514, avg=1381.82, stdev=1602.66
00:25:12.798 lat (msec): min=376, max=6523, avg=1393.26, stdev=1609.07
00:25:12.798 clat percentiles (msec):
00:25:12.798 | 1.00th=[ 376], 5.00th=[ 380], 10.00th=[ 380], 20.00th=[ 384],
00:25:12.798 | 30.00th=[ 397], 40.00th=[ 481], 50.00th=[ 609], 60.00th=[ 785],
00:25:12.798 | 70.00th=[ 1301], 80.00th=[ 1972], 90.00th=[ 4732], 95.00th=[ 5403],
00:25:12.798 | 99.00th=[ 5940], 99.50th=[ 6007], 99.90th=[ 6544], 99.95th=[ 6544],
00:25:12.798 | 99.99th=[ 6544]
00:25:12.798 bw ( KiB/s): min= 4104, max=342016, per=3.94%, avg=132438.00, stdev=117953.19, samples=12
00:25:12.798 iops : min= 4, max= 334, avg=129.33, stdev=115.19, samples=12
00:25:12.798 lat (msec) : 100=0.11%, 500=41.59%, 750=17.26%, 1000=5.42%, 2000=17.37%
00:25:12.798 lat (msec) : >=2000=18.25%
00:25:12.798 cpu : usr=0.04%, sys=2.11%, ctx=1282, majf=0, minf=32769
00:25:12.798 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0%
00:25:12.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:12.798 issued rwts: total=904,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.798 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.798 job2: (groupid=0, jobs=1): err= 0: pid=2903597: Sun Dec 15 16:12:39 2024
00:25:12.798 read: IOPS=53, BW=53.9MiB/s (56.5MB/s)(558MiB/10356msec)
00:25:12.798 slat (usec): min=38, max=2113.5k, avg=17923.76, stdev=116203.19
00:25:12.798 clat (msec): min=350, max=5903, avg=2234.07, stdev=1937.27
00:25:12.798 lat (msec): min=384, max=5916, avg=2252.00, stdev=1944.70
00:25:12.798 clat percentiles (msec):
00:25:12.798 | 1.00th=[ 409], 5.00th=[ 584], 10.00th=[ 676], 20.00th=[ 860],
00:25:12.798 | 30.00th=[ 1099], 40.00th=[ 1250], 50.00th=[ 1334], 60.00th=[ 1603],
00:25:12.798 | 70.00th=[ 1754], 80.00th=[ 5738], 90.00th=[ 5738], 95.00th=[ 5805],
00:25:12.798 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873],
00:25:12.798 | 99.99th=[ 5873]
00:25:12.798 bw ( KiB/s): min= 6144, max=153600, per=2.19%, avg=73512.17, stdev=43128.28, samples=12
00:25:12.798 iops : min= 6, max= 150, avg=71.58, stdev=42.04, samples=12
00:25:12.798 lat (msec) : 500=2.69%, 750=11.29%, 1000=11.83%, 2000=50.36%, >=2000=23.84%
00:25:12.798 cpu : usr=0.05%, sys=1.63%, ctx=1300, majf=0, minf=32769
00:25:12.798 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7%
00:25:12.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:12.798 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.798 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.798 job2: (groupid=0, jobs=1): err= 0: pid=2903598: Sun Dec 15 16:12:39 2024
00:25:12.798 read: IOPS=21, BW=21.4MiB/s (22.5MB/s)(221MiB/10317msec)
00:25:12.798 slat (usec): min=954, max=2139.5k, avg=46382.32, stdev=264015.62
00:25:12.798 clat (msec): min=64, max=10191, avg=4739.04, stdev=2681.88
00:25:12.798 lat (msec): min=1226, max=10209, avg=4785.43, stdev=2677.33
00:25:12.798 clat percentiles (msec):
00:25:12.798 | 1.00th=[ 1217], 5.00th=[ 1267], 10.00th=[ 1351], 20.00th=[ 1401],
00:25:12.798 | 30.00th=[ 1485], 40.00th=[ 3205], 50.00th=[ 6544], 60.00th=[ 6745],
00:25:12.798 | 70.00th=[ 6946], 80.00th=[ 7080], 90.00th=[ 7349], 95.00th=[ 7483],
00:25:12.798 | 99.00th=[ 8658], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:25:12.798 | 99.99th=[10134]
00:25:12.798 bw ( KiB/s): min= 2048, max=91976, per=1.13%, avg=38056.00, stdev=45708.30, samples=5
00:25:12.798 iops : min= 2, max= 89, avg=37.00, stdev=44.40, samples=5
00:25:12.798 lat (msec) : 100=0.45%, 2000=31.67%, >=2000=67.87%
00:25:12.798 cpu : usr=0.00%, sys=1.08%, ctx=583, majf=0, minf=32769
00:25:12.798 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.5%, >=64=71.5%
00:25:12.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1%
00:25:12.798 issued rwts: total=221,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.798 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.798 job2: (groupid=0, jobs=1): err= 0: pid=2903599: Sun Dec 15 16:12:39 2024
00:25:12.798 read: IOPS=3, BW=3745KiB/s (3835kB/s)(45.0MiB/12303msec)
00:25:12.798 slat (usec): min=624, max=2090.7k, avg=226666.15, stdev=629614.31
00:25:12.798 clat (msec): min=2101, max=12275, avg=7619.14, stdev=2921.31
00:25:12.798 lat (msec): min=4171, max=12302, avg=7845.81, stdev=2878.90
00:25:12.798 clat percentiles (msec):
00:25:12.798 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4212],
00:25:12.798 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490],
00:25:12.798 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12281], 95.00th=[12281],
00:25:12.798 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:25:12.798 | 99.99th=[12281]
00:25:12.798 lat (msec) : >=2000=100.00%
00:25:12.798 cpu : usr=0.02%, sys=0.30%, ctx=47, majf=0, minf=11521
00:25:12.798 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0%
00:25:12.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.798 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.798 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.798 job2: (groupid=0, jobs=1): err= 0: pid=2903600: Sun Dec 15 16:12:39 2024
00:25:12.798 read: IOPS=1, BW=1597KiB/s (1635kB/s)(16.0MiB/10262msec)
00:25:12.798 slat (msec): min=2, max=2161, avg=637.15, stdev=955.68
00:25:12.798 clat (msec): min=67, max=10190, avg=6208.63, stdev=3318.19
00:25:12.798 lat (msec): min=2109, max=10261, avg=6845.78, stdev=3026.20
00:25:12.798 clat percentiles (msec):
00:25:12.798 | 1.00th=[ 67], 5.00th=[ 67], 10.00th=[ 2106], 20.00th=[ 2140],
00:25:12.798 | 30.00th=[ 4279], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8557],
00:25:12.798 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[10134], 95.00th=[10134],
00:25:12.798 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:25:12.798 | 99.99th=[10134]
00:25:12.798 lat (msec) : 100=6.25%, >=2000=93.75%
00:25:12.798 cpu : usr=0.00%, sys=0.13%, ctx=56, majf=0, minf=4097
00:25:12.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:25:12.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.798 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.798 job2: (groupid=0, jobs=1): err= 0: pid=2903601: Sun Dec 15 16:12:39 2024
00:25:12.798 read: IOPS=3, BW=3981KiB/s (4077kB/s)(40.0MiB/10288msec)
00:25:12.798 slat (usec): min=516, max=2128.9k, avg=255562.52, stdev=661204.38
00:25:12.798 clat (msec): min=65, max=10282, avg=8358.76, stdev=2925.60
00:25:12.798 lat (msec): min=2121, max=10287, avg=8614.32, stdev=2612.26
00:25:12.798 clat percentiles (msec):
00:25:12.798 | 1.00th=[ 66], 5.00th=[ 2123], 10.00th=[ 2198], 20.00th=[ 4329],
00:25:12.798 | 30.00th=[ 8658], 40.00th=[ 8658], 50.00th=[10134], 60.00th=[10134],
00:25:12.798 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:25:12.798 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:25:12.798 | 99.99th=[10268]
00:25:12.798 lat (msec) : 100=2.50%, >=2000=97.50%
00:25:12.798 cpu : usr=0.00%, sys=0.41%, ctx=81, majf=0, minf=10241
00:25:12.798 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0%
00:25:12.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.798 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.798 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.798 job2: (groupid=0, jobs=1): err= 0: pid=2903602: Sun Dec 15 16:12:39 2024
00:25:12.798 read: IOPS=1, BW=1295KiB/s (1326kB/s)(13.0MiB/10281msec)
00:25:12.798 slat (msec): min=11, max=2128, avg=785.86, stdev=998.41
00:25:12.798 clat (msec): min=64, max=10197, avg=4774.11, stdev=3089.00
00:25:12.798 lat (msec): min=2120, max=10280, avg=5559.97, stdev=3090.46
00:25:12.798 clat percentiles (msec):
00:25:12.798 | 1.00th=[ 65], 5.00th=[ 65], 10.00th=[ 2123], 20.00th=[ 2165],
00:25:12.798 | 30.00th=[ 2165], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 4329],
00:25:12.798 | 70.00th=[ 6544], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10134],
00:25:12.798 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:25:12.798 | 99.99th=[10134]
00:25:12.798 lat (msec) : 100=7.69%, >=2000=92.31%
00:25:12.798 cpu : usr=0.00%, sys=0.11%, ctx=65, majf=0, minf=3329
00:25:12.798 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:12.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.798 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.798 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.798 job2: (groupid=0, jobs=1): err= 0: pid=2903603: Sun Dec 15 16:12:39 2024
00:25:12.798 read: IOPS=3, BW=3203KiB/s (3280kB/s)(32.0MiB/10229msec)
00:25:12.798 slat (msec): min=3, max=2092, avg=317.47, stdev=724.76
00:25:12.798 clat (msec): min=68, max=10213, avg=5448.87, stdev=2893.11
00:25:12.798 lat (msec): min=2095, max=10228, avg=5766.34, stdev=2840.63
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 69], 5.00th=[ 2089], 10.00th=[ 2140], 20.00th=[ 2165],
00:25:12.799 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6409],
00:25:12.799 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10134],
00:25:12.799 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:25:12.799 | 99.99th=[10268]
00:25:12.799 lat (msec) : 100=3.12%, >=2000=96.88%
00:25:12.799 cpu : usr=0.00%, sys=0.32%, ctx=65, majf=0, minf=8193
00:25:12.799 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0%
00:25:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.799 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:25:12.799 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.799 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.799 job2: (groupid=0, jobs=1): err= 0: pid=2903604: Sun Dec 15 16:12:39 2024
00:25:12.799 read: IOPS=8, BW=8560KiB/s (8765kB/s)(87.0MiB/10408msec)
00:25:12.799 slat (usec): min=923, max=2085.1k, avg=118810.03, stdev=460089.94
00:25:12.799 clat (msec): min=70, max=10405, avg=8710.72, stdev=2832.81
00:25:12.799 lat (msec): min=2094, max=10407, avg=8829.53, stdev=2678.81
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 71], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 6409],
00:25:12.799 | 30.00th=[10134], 40.00th=[10268], 50.00th=[10268], 60.00th=[10268],
00:25:12.799 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:25:12.799 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:25:12.799 | 99.99th=[10402]
00:25:12.799 lat (msec) : 100=1.15%, >=2000=98.85%
00:25:12.799 cpu : usr=0.00%, sys=1.09%, ctx=122, majf=0, minf=22273
00:25:12.799 IO depths : 1=1.1%, 2=2.3%, 4=4.6%, 8=9.2%, 16=18.4%, 32=36.8%, >=64=27.6%
00:25:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.799 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:25:12.799 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.799 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.799 job3: (groupid=0, jobs=1): err= 0: pid=2903605: Sun Dec 15 16:12:39 2024
00:25:12.799 read: IOPS=99, BW=99.4MiB/s (104MB/s)(995MiB/10014msec)
00:25:12.799 slat (usec): min=41, max=2133.3k, avg=10044.97, stdev=123979.92
00:25:12.799 clat (msec): min=12, max=8351, avg=434.35, stdev=1207.58
00:25:12.799 lat (msec): min=13, max=8356, avg=444.40, stdev=1233.39
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 21], 5.00th=[ 58], 10.00th=[ 150], 20.00th=[ 247],
00:25:12.799 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 249], 60.00th=[ 251],
00:25:12.799 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 253], 95.00th=[ 257],
00:25:12.799 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356],
00:25:12.799 | 99.99th=[ 8356]
00:25:12.799 bw ( KiB/s): min=163840, max=524288, per=12.00%, avg=403456.00, stdev=207516.07, samples=3
00:25:12.799 iops : min= 160, max= 512, avg=394.00, stdev=202.65, samples=3
00:25:12.799 lat (msec) : 20=0.90%, 50=3.62%, 100=3.12%, 250=52.76%, 500=36.28%
00:25:12.799 lat (msec) : >=2000=3.32%
00:25:12.799 cpu : usr=0.08%, sys=1.88%, ctx=897, majf=0, minf=32769
00:25:12.799 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7%
00:25:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.799 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:12.799 issued rwts: total=995,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.799 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.799 job3: (groupid=0, jobs=1): err= 0: pid=2903606: Sun Dec 15 16:12:39 2024
00:25:12.799 read: IOPS=24, BW=24.4MiB/s (25.6MB/s)(252MiB/10311msec)
00:25:12.799 slat (usec): min=415, max=2055.1k, avg=40674.83, stdev=241106.05
00:25:12.799 clat (msec): min=58, max=9172, avg=4865.90, stdev=3436.47
00:25:12.799 lat (msec): min=1039, max=9183, avg=4906.57, stdev=3429.82
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 1028], 5.00th=[ 1045], 10.00th=[ 1053], 20.00th=[ 1099],
00:25:12.799 | 30.00th=[ 1183], 40.00th=[ 2106], 50.00th=[ 5000], 60.00th=[ 8087],
00:25:12.799 | 70.00th=[ 8423], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060],
00:25:12.799 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:25:12.799 | 99.99th=[ 9194]
00:25:12.799 bw ( KiB/s): min= 6144, max=92160, per=1.08%, avg=36278.86, stdev=33354.00, samples=7
00:25:12.799 iops : min= 6, max= 90, avg=35.43, stdev=32.57, samples=7
00:25:12.799 lat (msec) : 100=0.40%, 2000=38.89%, >=2000=60.71%
00:25:12.799 cpu : usr=0.00%, sys=0.97%, ctx=510, majf=0, minf=32769
00:25:12.799 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.7%, >=64=75.0%
00:25:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.799 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:25:12.799 issued rwts: total=252,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.799 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.799 job3: (groupid=0, jobs=1): err= 0: pid=2903607: Sun Dec 15 16:12:39 2024
00:25:12.799 read: IOPS=101, BW=101MiB/s (106MB/s)(1023MiB/10097msec)
00:25:12.799 slat (usec): min=59, max=92174, avg=9786.52, stdev=10208.76
00:25:12.799 clat (msec): min=75, max=1945, avg=1184.90, stdev=333.55
00:25:12.799 lat (msec): min=98, max=1955, avg=1194.68, stdev=334.37
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 178], 5.00th=[ 600], 10.00th=[ 911], 20.00th=[ 936],
00:25:12.799 | 30.00th=[ 1028], 40.00th=[ 1099], 50.00th=[ 1200], 60.00th=[ 1284],
00:25:12.799 | 70.00th=[ 1318], 80.00th=[ 1351], 90.00th=[ 1653], 95.00th=[ 1821],
00:25:12.799 | 99.00th=[ 1888], 99.50th=[ 1905], 99.90th=[ 1921], 99.95th=[ 1938],
00:25:12.799 | 99.99th=[ 1938]
00:25:12.799 bw ( KiB/s): min=65536, max=151552, per=3.03%, avg=101806.06, stdev=26597.27, samples=18
00:25:12.799 iops : min= 64, max= 148, avg=99.33, stdev=25.94, samples=18
00:25:12.799 lat (msec) : 100=0.20%, 250=1.56%, 500=2.25%, 750=2.25%, 1000=19.94%
00:25:12.799 lat (msec) : 2000=73.80%
00:25:12.799 cpu : usr=0.10%, sys=2.75%, ctx=1685, majf=0, minf=32769
00:25:12.799 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8%
00:25:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.799 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:12.799 issued rwts: total=1023,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.799 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.799 job3: (groupid=0, jobs=1): err= 0: pid=2903608: Sun Dec 15 16:12:39 2024
00:25:12.799 read: IOPS=34, BW=34.5MiB/s (36.2MB/s)(359MiB/10403msec)
00:25:12.799 slat (usec): min=59, max=2080.2k, avg=28807.11, stdev=208521.27
00:25:12.799 clat (msec): min=58, max=9122, avg=3535.43, stdev=3779.27
00:25:12.799 lat (msec): min=618, max=9126, avg=3564.24, stdev=3783.81
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 617], 5.00th=[ 617], 10.00th=[ 625], 20.00th=[ 625],
00:25:12.799 | 30.00th=[ 634], 40.00th=[ 634], 50.00th=[ 709], 60.00th=[ 936],
00:25:12.799 | 70.00th=[ 8288], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 9060],
00:25:12.799 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:25:12.799 | 99.99th=[ 9060]
00:25:12.799 bw ( KiB/s): min= 8175, max=202752, per=2.01%, avg=67581.57, stdev=76622.22, samples=7
00:25:12.799 iops : min= 7, max= 198, avg=65.86, stdev=74.95, samples=7
00:25:12.799 lat (msec) : 100=0.28%, 750=52.65%, 1000=8.08%, >=2000=39.00%
00:25:12.799 cpu : usr=0.03%, sys=1.37%, ctx=468, majf=0, minf=32769
00:25:12.799 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=8.9%, >=64=82.5%
00:25:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.799 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:12.799 issued rwts: total=359,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.799 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.799 job3: (groupid=0, jobs=1): err= 0: pid=2903609: Sun Dec 15 16:12:39 2024
00:25:12.799 read: IOPS=38, BW=38.3MiB/s (40.2MB/s)(393MiB/10252msec)
00:25:12.799 slat (usec): min=31, max=2062.2k, avg=25875.82, stdev=205281.98
00:25:12.799 clat (msec): min=80, max=9018, avg=3199.57, stdev=3435.71
00:25:12.799 lat (msec): min=515, max=9020, avg=3225.44, stdev=3442.87
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 514], 5.00th=[ 518], 10.00th=[ 518], 20.00th=[ 523],
00:25:12.799 | 30.00th=[ 550], 40.00th=[ 584], 50.00th=[ 617], 60.00th=[ 2635],
00:25:12.799 | 70.00th=[ 4799], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8926],
00:25:12.799 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:25:12.799 | 99.99th=[ 9060]
00:25:12.799 bw ( KiB/s): min=24576, max=204800, per=2.69%, avg=90440.33, stdev=85980.65, samples=6
00:25:12.799 iops : min= 24, max= 200, avg=88.17, stdev=84.08, samples=6
00:25:12.799 lat (msec) : 100=0.25%, 750=56.23%, >=2000=43.51%
00:25:12.799 cpu : usr=0.05%, sys=1.36%, ctx=329, majf=0, minf=32769
00:25:12.799 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0%
00:25:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.799 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:12.799 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.799 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.799 job3: (groupid=0, jobs=1): err= 0: pid=2903610: Sun Dec 15 16:12:39 2024
00:25:12.799 read: IOPS=35, BW=35.3MiB/s (37.0MB/s)(362MiB/10252msec)
00:25:12.799 slat (usec): min=47, max=2111.4k, avg=28121.61, stdev=217873.42
00:25:12.799 clat (msec): min=69, max=9016, avg=3460.74, stdev=3409.14
00:25:12.799 lat (msec): min=616, max=9017, avg=3488.86, stdev=3414.71
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 617], 5.00th=[ 617], 10.00th=[ 617], 20.00th=[ 617],
00:25:12.799 | 30.00th=[ 625], 40.00th=[ 625], 50.00th=[ 659], 60.00th=[ 4279],
00:25:12.799 | 70.00th=[ 4799], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8926],
00:25:12.799 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:25:12.799 | 99.99th=[ 9060]
00:25:12.799 bw ( KiB/s): min= 6144, max=208896, per=2.38%, avg=79861.67, stdev=74291.63, samples=6
00:25:12.799 iops : min= 6, max= 204, avg=77.83, stdev=72.68, samples=6
00:25:12.799 lat (msec) : 100=0.28%, 750=52.76%, >=2000=46.96%
00:25:12.799 cpu : usr=0.03%, sys=1.28%, ctx=301, majf=0, minf=32769
00:25:12.799 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.6%
00:25:12.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.799 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:12.799 issued rwts: total=362,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.799 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.799 job3: (groupid=0, jobs=1): err= 0: pid=2903611: Sun Dec 15 16:12:39 2024
00:25:12.799 read: IOPS=58, BW=58.4MiB/s (61.3MB/s)(607MiB/10387msec)
00:25:12.799 slat (usec): min=44, max=2076.5k, avg=16986.42, stdev=155490.58
00:25:12.799 clat (msec): min=73, max=6427, avg=1317.69, stdev=1562.32
00:25:12.799 lat (msec): min=480, max=6432, avg=1334.67, stdev=1575.06
00:25:12.799 clat percentiles (msec):
00:25:12.799 | 1.00th=[ 481], 5.00th=[ 485], 10.00th=[ 485], 20.00th=[ 489],
00:25:12.800 | 30.00th=[ 493], 40.00th=[ 498], 50.00th=[ 498], 60.00th=[ 506],
00:25:12.800 | 70.00th=[ 575], 80.00th=[ 2333], 90.00th=[ 2567], 95.00th=[ 6409],
00:25:12.800 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 6409], 99.95th=[ 6409],
00:25:12.800 | 99.99th=[ 6409]
00:25:12.800 bw ( KiB/s): min=53248, max=268288, per=5.84%, avg=196198.40, stdev=96900.92, samples=5
00:25:12.800 iops : min= 52, max= 262, avg=191.60, stdev=94.63, samples=5
00:25:12.800 lat (msec) : 100=0.16%, 500=53.54%, 750=16.97%, >=2000=29.32%
00:25:12.800 cpu : usr=0.03%, sys=1.42%, ctx=615, majf=0, minf=32769
00:25:12.800 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:12.800 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.800 job3: (groupid=0, jobs=1): err= 0: pid=2903612: Sun Dec 15 16:12:39 2024
00:25:12.800 read: IOPS=3, BW=3657KiB/s (3745kB/s)(37.0MiB/10361msec)
00:25:12.800 slat (usec): min=802, max=2143.5k, avg=278140.86, stdev=687494.92
00:25:12.800 clat (msec): min=69, max=10355, avg=8445.89, stdev=3019.01
00:25:12.800 lat (msec): min=2140, max=10360, avg=8724.03, stdev=2680.96
00:25:12.800 clat percentiles (msec):
00:25:12.800 | 1.00th=[ 69], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 6409],
00:25:12.800 | 30.00th=[ 8658], 40.00th=[10134], 50.00th=[10268], 60.00th=[10268],
00:25:12.800 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:25:12.800 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:25:12.800 | 99.99th=[10402]
00:25:12.800 lat (msec) : 100=2.70%, >=2000=97.30%
00:25:12.800 cpu : usr=0.00%, sys=0.38%, ctx=101, majf=0, minf=9473
00:25:12.800 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.800 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.800 job3: (groupid=0, jobs=1): err= 0: pid=2903613: Sun Dec 15 16:12:39 2024
00:25:12.800 read: IOPS=20, BW=21.0MiB/s (22.0MB/s)(216MiB/10305msec)
00:25:12.800 slat (usec): min=65, max=2124.5k, avg=46306.66, stdev=263962.20
00:25:12.800 clat (msec): min=300, max=9285, avg=2095.99, stdev=2695.50
00:25:12.800 lat (msec): min=312, max=9296, avg=2142.30, stdev=2740.64
00:25:12.800 clat percentiles (msec):
00:25:12.800 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 443], 20.00th=[ 667],
00:25:12.800 | 30.00th=[ 885], 40.00th=[ 1083], 50.00th=[ 1217], 60.00th=[ 1267],
00:25:12.800 | 70.00th=[ 1284], 80.00th=[ 1418], 90.00th=[ 9060], 95.00th=[ 9194],
00:25:12.800 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329],
00:25:12.800 | 99.99th=[ 9329]
00:25:12.800 bw ( KiB/s): min=67584, max=112881, per=2.68%, avg=90232.50, stdev=32029.82, samples=2
00:25:12.800 iops : min= 66, max= 110, avg=88.00, stdev=31.11, samples=2
00:25:12.800 lat (msec) : 500=12.04%, 750=12.04%, 1000=12.04%, 2000=47.22%, >=2000=16.67%
00:25:12.800 cpu : usr=0.01%, sys=1.03%, ctx=427, majf=0, minf=32769
00:25:12.800 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.4%, 32=14.8%, >=64=70.8%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1%
00:25:12.800 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.800 job3: (groupid=0, jobs=1): err= 0: pid=2903614: Sun Dec 15 16:12:39 2024
00:25:12.800 read: IOPS=5, BW=5915KiB/s (6056kB/s)(60.0MiB/10388msec)
00:25:12.800 slat (usec): min=780, max=2087.7k, avg=172002.87, stdev=536950.27
00:25:12.800 clat (msec): min=67, max=10381, avg=8392.19, stdev=2942.65
00:25:12.800 lat (msec): min=2128, max=10387, avg=8564.19, stdev=2742.60
00:25:12.800 clat percentiles (msec):
00:25:12.800 | 1.00th=[ 68], 5.00th=[ 2140], 10.00th=[ 4111], 20.00th=[ 4279],
00:25:12.800 | 30.00th=[ 8557], 40.00th=[10134], 50.00th=[10268], 60.00th=[10268],
00:25:12.800 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:25:12.800 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:25:12.800 | 99.99th=[10402]
00:25:12.800 lat (msec) : 100=1.67%, >=2000=98.33%
00:25:12.800 cpu : usr=0.00%, sys=0.65%, ctx=149, majf=0, minf=15361
00:25:12.800 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.800 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.800 job3: (groupid=0, jobs=1): err= 0: pid=2903615: Sun Dec 15 16:12:39 2024
00:25:12.800 read: IOPS=39, BW=39.4MiB/s (41.3MB/s)(411MiB/10423msec)
00:25:12.800 slat (usec): min=653, max=2093.3k, avg=25186.36, stdev=162509.30
00:25:12.800 clat (msec): min=67, max=8541, avg=2726.40, stdev=1762.77
00:25:12.800 lat (msec): min=1045, max=8621, avg=2751.59, stdev=1766.27
00:25:12.800 clat percentiles (msec):
00:25:12.800 | 1.00th=[ 1045], 5.00th=[ 1053], 10.00th=[ 1070], 20.00th=[ 1167],
00:25:12.800 | 30.00th=[ 1301], 40.00th=[ 1586], 50.00th=[ 1821], 60.00th=[ 2836],
00:25:12.800 | 70.00th=[ 4329], 80.00th=[ 4933], 90.00th=[ 5470], 95.00th=[ 5805],
00:25:12.800 | 99.00th=[ 6007], 99.50th=[ 6074], 99.90th=[ 8557], 99.95th=[ 8557],
00:25:12.800 | 99.99th=[ 8557]
00:25:12.800 bw ( KiB/s): min= 6144, max=121074, per=1.92%, avg=64470.11, stdev=39002.31, samples=9
00:25:12.800 iops : min= 6, max= 118, avg=62.89, stdev=38.04, samples=9
00:25:12.800 lat (msec) : 100=0.24%, 2000=57.18%, >=2000=42.58%
00:25:12.800 cpu : usr=0.00%, sys=1.86%, ctx=825, majf=0, minf=32769
00:25:12.800 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.8%, >=64=84.7%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:12.800 issued rwts: total=411,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.800 job3: (groupid=0, jobs=1): err= 0: pid=2903616: Sun Dec 15 16:12:39 2024
00:25:12.800 read: IOPS=3, BW=3458KiB/s (3541kB/s)(35.0MiB/10363msec)
00:25:12.800 slat (usec): min=998, max=2102.5k, avg=293965.48, stdev=698147.57
00:25:12.800 clat (msec): min=73, max=10359, avg=7868.47, stdev=3226.65
00:25:12.800 lat (msec): min=2098, max=10362, avg=8162.44, stdev=2952.62
00:25:12.800 clat percentiles (msec):
00:25:12.800 | 1.00th=[ 73], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 4279],
00:25:12.800 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10134], 60.00th=[10134],
00:25:12.800 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10402],
00:25:12.800 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:25:12.800 | 99.99th=[10402]
00:25:12.800 lat (msec) : 100=2.86%, >=2000=97.14%
00:25:12.800 cpu : usr=0.00%, sys=0.27%, ctx=98, majf=0, minf=8961
00:25:12.800 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.800 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.800 job3: (groupid=0, jobs=1): err= 0: pid=2903617: Sun Dec 15 16:12:39 2024
00:25:12.800 read: IOPS=2, BW=2803KiB/s (2871kB/s)(28.0MiB/10228msec)
00:25:12.800 slat (msec): min=5, max=2103, avg=362.37, stdev=766.45
00:25:12.800 clat (msec): min=80, max=10192, avg=6477.08, stdev=3146.93
00:25:12.800 lat (msec): min=2113, max=10227, avg=6839.46, stdev=2961.84
00:25:12.800 clat percentiles (msec):
00:25:12.800 | 1.00th=[ 81], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2165],
00:25:12.800 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557],
00:25:12.800 | 70.00th=[ 8658], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134],
00:25:12.800 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:25:12.800 | 99.99th=[10134]
00:25:12.800 lat (msec) : 100=3.57%, >=2000=96.43%
00:25:12.800 cpu : usr=0.00%, sys=0.22%, ctx=73, majf=0, minf=7169
00:25:12.800 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:25:12.800 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.800 job4: (groupid=0, jobs=1): err= 0: pid=2903618: Sun Dec 15 16:12:39 2024
00:25:12.800 read: IOPS=5, BW=5683KiB/s (5820kB/s)(57.0MiB/10270msec)
00:25:12.800 slat (usec): min=697, max=2113.3k, avg=178769.81, stdev=552729.85
00:25:12.800 clat (msec): min=79, max=8599, avg=4143.19, stdev=1241.90
00:25:12.800 lat (msec): min=2127, max=10269, avg=4321.95, stdev=1373.00
00:25:12.800 clat percentiles (msec):
00:25:12.800 | 1.00th=[ 80], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4010],
00:25:12.800 | 30.00th=[ 4077], 40.00th=[ 4111], 50.00th=[ 4144], 60.00th=[ 4178],
00:25:12.800 | 70.00th=[ 4212], 80.00th=[ 4245], 90.00th=[ 4329], 95.00th=[ 6477],
00:25:12.800 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:25:12.800 | 99.99th=[ 8658]
00:25:12.800 lat (msec) : 100=1.75%, >=2000=98.25%
00:25:12.800 cpu : usr=0.00%, sys=0.41%, ctx=123, majf=0, minf=14593
00:25:12.800 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:12.800 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.800 job4: (groupid=0, jobs=1): err= 0: pid=2903619: Sun Dec 15 16:12:39 2024
00:25:12.800 read: IOPS=0, BW=995KiB/s (1019kB/s)(10.0MiB/10288msec)
00:25:12.800 slat (msec): min=25, max=2113, avg=1020.48, stdev=1044.72
00:25:12.800 clat (msec): min=82, max=8611, avg=4753.17, stdev=2804.85
00:25:12.800 lat (msec): min=2152, max=10287, avg=5773.65, stdev=2772.97
00:25:12.800 clat percentiles (msec):
00:25:12.800 | 1.00th=[ 83], 5.00th=[ 83], 10.00th=[ 83], 20.00th=[ 2165],
00:25:12.800 | 30.00th=[ 2232], 40.00th=[ 4329], 50.00th=[ 4329], 60.00th=[ 4329],
00:25:12.800 | 70.00th=[ 6409], 80.00th=[ 6477], 90.00th=[ 8557], 95.00th=[ 8658],
00:25:12.800 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:25:12.800 | 99.99th=[ 8658]
00:25:12.800 lat (msec) : 100=10.00%, >=2000=90.00%
00:25:12.800 cpu : usr=0.00%, sys=0.07%, ctx=57, majf=0, minf=2561
00:25:12.800 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:12.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.800 issued rwts: total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.800 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.801 job4: (groupid=0, jobs=1): err= 0: pid=2903620: Sun Dec 15 16:12:39 2024
00:25:12.801 read: IOPS=54, BW=54.7MiB/s (57.3MB/s)(561MiB/10260msec)
00:25:12.801 slat (usec): min=61, max=2063.3k, avg=18130.72, stdev=161713.99
00:25:12.801 clat (msec): min=83, max=6314, avg=1026.30, stdev=978.00
00:25:12.801 lat (msec): min=482, max=6388, avg=1044.43, stdev=1003.94
00:25:12.801 clat percentiles (msec):
00:25:12.801 | 1.00th=[ 481], 5.00th=[ 485], 10.00th=[ 485], 20.00th=[ 485],
00:25:12.801 | 30.00th=[ 489], 40.00th=[ 489], 50.00th=[ 489], 60.00th=[ 502],
00:25:12.801 | 70.00th=[ 506], 80.00th=[ 2333], 90.00th=[ 2534], 95.00th=[ 2635],
00:25:12.801 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 6342], 99.95th=[ 6342],
00:25:12.801 | 99.99th=[ 6342]
00:25:12.801 bw ( KiB/s): min=22528, max=270336, per=5.28%, avg=177356.80, stdev=122688.70, samples=5
00:25:12.801 iops : min= 22, max= 264, avg=173.20, stdev=119.81, samples=5
00:25:12.801 lat (msec) : 100=0.18%, 500=57.22%, 750=17.29%, >=2000=25.31%
00:25:12.801 cpu : usr=0.07%, sys=1.64%, ctx=506, majf=0, minf=32769
00:25:12.801 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.8%
00:25:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.801 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:12.801 issued rwts: total=561,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.801 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.801 job4: (groupid=0, jobs=1): err= 0: pid=2903621: Sun Dec 15 16:12:39 2024
00:25:12.801 read: IOPS=16, BW=16.3MiB/s (17.1MB/s)(168MiB/10291msec)
00:25:12.801 slat (usec): min=524, max=2135.7k, avg=60747.01, stdev=316538.01
00:25:12.801 clat (msec): min=84, max=9893, avg=7309.69, stdev=3305.12
00:25:12.801 lat (msec): min=1299, max=9911, avg=7370.44, stdev=3259.45
00:25:12.801 clat percentiles (msec):
00:25:12.801 | 1.00th=[ 1284], 5.00th=[ 1301], 10.00th=[ 1351], 20.00th=[ 2106],
00:25:12.801 | 30.00th=[ 8658], 40.00th=[ 8792], 50.00th=[ 9060], 60.00th=[ 9194],
00:25:12.801 | 70.00th=[ 9463], 80.00th=[ 9597], 90.00th=[ 9731], 95.00th=[ 9866],
00:25:12.801 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866],
00:25:12.801 | 99.99th=[ 9866]
00:25:12.801 bw ( KiB/s): min= 2048, max=34816, per=0.41%, avg=13653.33, stdev=14919.04, samples=6
00:25:12.801 iops : min= 2, max= 34, avg=13.33, stdev=14.57, samples=6
00:25:12.801 lat (msec) : 100=0.60%, 2000=19.05%, >=2000=80.36%
00:25:12.801 cpu : usr=0.00%, sys=0.97%, ctx=359, majf=0, minf=32769
00:25:12.801 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.5%, 32=19.0%, >=64=62.5%
00:25:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.801 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4%
00:25:12.801 issued rwts: total=168,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.801 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.801 job4: (groupid=0, jobs=1): err= 0: pid=2903622: Sun Dec 15 16:12:39 2024
00:25:12.801 read: IOPS=2, BW=2283KiB/s (2338kB/s)(23.0MiB/10317msec)
00:25:12.801 slat (usec): min=1268, max=2099.0k, avg=444924.15, stdev=827093.89
00:25:12.801 clat (msec): min=83, max=10310, avg=4821.05, stdev=3040.32
00:25:12.801 lat (msec): min=2092, max=10316, avg=5265.97, stdev=3064.13
00:25:12.801 clat percentiles (msec):
00:25:12.801 | 1.00th=[ 84], 5.00th=[ 2089], 10.00th=[ 2089], 20.00th=[ 2123],
00:25:12.801 | 30.00th=[ 2123], 40.00th=[ 2232], 50.00th=[ 4329], 60.00th=[ 6409],
00:25:12.801 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10268],
00:25:12.801 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:25:12.801 | 99.99th=[10268]
00:25:12.801 lat (msec) : 100=4.35%, >=2000=95.65%
00:25:12.801 cpu : usr=0.00%, sys=0.14%, ctx=72, majf=0, minf=5889
00:25:12.801 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0%
00:25:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.801 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:25:12.801 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.801 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.801 job4: (groupid=0, jobs=1): err= 0: pid=2903623: Sun Dec 15 16:12:39 2024
00:25:12.801 read: IOPS=18, BW=18.0MiB/s (18.9MB/s)(186MiB/10309msec)
00:25:12.801 slat (usec): min=565, max=2107.2k, avg=54992.52, stdev=283670.95
00:25:12.801 clat (msec): min=78, max=9305, avg=2505.79, stdev=2769.18
00:25:12.801 lat (msec): min=433, max=9308, avg=2560.78, stdev=2811.99
00:25:12.801 clat percentiles (msec):
00:25:12.801 | 1.00th=[ 435], 5.00th=[ 592], 10.00th=[ 651], 20.00th=[ 902],
00:25:12.801 | 30.00th=[ 1116], 40.00th=[ 1318], 50.00th=[ 1385], 60.00th=[ 1469],
00:25:12.801 | 70.00th=[ 1653], 80.00th=[ 2198], 90.00th=[ 9194], 95.00th=[ 9194],
00:25:12.801 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329],
00:25:12.801 | 99.99th=[ 9329]
00:25:12.801 bw ( KiB/s): min=57217, max=59392, per=1.73%, avg=58304.50, stdev=1537.96, samples=2
00:25:12.801 iops : min= 55, max= 58, avg=56.50, stdev= 2.12, samples=2
00:25:12.801 lat (msec) : 100=0.54%, 500=2.69%, 750=11.29%, 1000=10.75%, 2000=53.23%
00:25:12.801 lat (msec) : >=2000=21.51%
00:25:12.801 cpu : usr=0.00%, sys=1.11%, ctx=622, majf=0, minf=32769
00:25:12.801 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.2%, >=64=66.1%
00:25:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.801 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7%
00:25:12.801 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.801 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.801 job4: (groupid=0, jobs=1): err= 0: pid=2903624: Sun Dec 15 16:12:39 2024
00:25:12.801 read: IOPS=87, BW=87.2MiB/s (91.4MB/s)(876MiB/10050msec)
00:25:12.801 slat (usec): min=56, max=2000.6k, avg=11416.88, stdev=86729.81
00:25:12.801 clat (msec): min=45, max=4668, avg=983.99, stdev=615.80
00:25:12.801 lat (msec): min=54, max=4685, avg=995.41, stdev=627.66
00:25:12.801 clat percentiles (msec):
00:25:12.801 | 1.00th=[ 89], 5.00th=[ 321], 10.00th=[ 684], 20.00th=[ 877],
00:25:12.801 | 30.00th=[ 911], 40.00th=[ 919], 50.00th=[ 927], 60.00th=[ 936],
00:25:12.801 | 70.00th=[ 944], 80.00th=[ 969], 90.00th=[ 978], 95.00th=[ 1003],
00:25:12.801 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665],
00:25:12.801 | 99.99th=[ 4665]
00:25:12.801 bw ( KiB/s): min=126976, max=153600, per=4.14%, avg=139230.91, stdev=10402.12, samples=11
00:25:12.801 iops : min= 124, max= 150, avg=135.82, stdev=10.21, samples=11
00:25:12.801 lat (msec) : 50=0.11%, 100=1.03%, 250=2.63%, 500=3.31%, 750=3.77%
00:25:12.801 lat (msec) : 1000=84.25%, 2000=0.68%, >=2000=4.22%
00:25:12.801 cpu : usr=0.02%, sys=1.37%, ctx=922, majf=0, minf=32769
00:25:12.801 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8%
00:25:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.801 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:12.801 issued rwts: total=876,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.801 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.801 job4: (groupid=0, jobs=1): err= 0: pid=2903625: Sun Dec 15 16:12:39 2024
00:25:12.801 read: IOPS=78, BW=78.3MiB/s (82.1MB/s)(786MiB/10042msec)
00:25:12.801 slat (usec): min=74, max=2110.7k, avg=12724.04, stdev=114587.29
00:25:12.801 clat (msec): min=37, max=5380, avg=1530.33, stdev=1695.29
00:25:12.801 lat (msec): min=42, max=5388, avg=1543.05, stdev=1700.47
00:25:12.801 clat percentiles (msec):
00:25:12.801 | 1.00th=[ 70], 5.00th=[ 176], 10.00th=[ 342], 20.00th=[ 481],
00:25:12.801 | 30.00th=[ 493], 40.00th=[ 523], 50.00th=[ 567], 60.00th=[ 1133],
00:25:12.801 | 70.00th=[ 1888], 80.00th=[ 1955], 90.00th=[ 5134], 95.00th=[ 5336],
00:25:12.801 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403],
00:25:12.801 | 99.99th=[ 5403]
00:25:12.801 bw ( KiB/s): min= 8175, max=266240, per=4.46%, avg=149899.00, stdev=95968.88, samples=9
00:25:12.801 iops : min= 7, max= 260, avg=146.22, stdev=93.83, samples=9
00:25:12.801 lat (msec) : 50=0.38%, 100=1.53%, 250=5.34%, 500=26.46%, 750=24.55%
00:25:12.801 lat (msec) : 2000=25.06%, >=2000=16.67%
00:25:12.801 cpu : usr=0.02%, sys=1.68%, ctx=1338, majf=0, minf=32769
00:25:12.801 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0%
00:25:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.801 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:12.801 issued rwts: total=786,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.801 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.801 job4: (groupid=0, jobs=1): err= 0: pid=2903626: Sun Dec 15 16:12:39 2024
00:25:12.801 read: IOPS=115, BW=116MiB/s (122MB/s)(1163MiB/10026msec)
00:25:12.801 slat (usec): min=41, max=2070.5k, avg=8592.86, stdev=96101.42
00:25:12.801 clat (msec): min=23, max=6461, avg=640.38, stdev=1055.35
00:25:12.801 lat (msec): min=25, max=6463, avg=648.98, stdev=1069.10
00:25:12.801 clat percentiles (msec):
00:25:12.801 | 1.00th=[ 47], 5.00th=[ 153], 10.00th=[ 317], 20.00th=[ 363],
00:25:12.801 | 30.00th=[ 368], 40.00th=[ 372], 50.00th=[ 426], 60.00th=[ 485],
00:25:12.801 | 70.00th=[ 489], 80.00th=[ 506], 90.00th=[ 609], 95.00th=[ 709],
00:25:12.801 | 99.00th=[ 6409], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477],
00:25:12.801 | 99.99th=[ 6477]
00:25:12.801 bw ( KiB/s): min=40960, max=374784, per=7.89%, avg=265166.88, stdev=109138.25, samples=8
00:25:12.801 iops : min= 40, max= 366, avg=258.88, stdev=106.64, samples=8
00:25:12.801 lat (msec) : 50=1.38%, 100=2.15%, 250=4.56%, 500=68.87%, 750=18.31%
00:25:12.801 lat (msec) : >=2000=4.73%
00:25:12.801 cpu : usr=0.10%, sys=2.22%, ctx=1085, majf=0, minf=32769
00:25:12.801 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6%
00:25:12.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.801 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:12.801 issued rwts: total=1163,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.801 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.801 job4: (groupid=0, jobs=1): err= 0: pid=2903627: Sun Dec 15 16:12:39 2024
00:25:12.801 read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(188MiB/10023msec)
00:25:12.801 slat (usec): min=66, max=3595.8k, avg=53190.26, stdev=324750.10
00:25:12.801 clat (msec): min=22, max=9410, avg=2442.95, stdev=2526.93
00:25:12.801 lat (msec): min=22, max=9425, avg=2496.14, stdev=2574.20
00:25:12.801 clat percentiles (msec):
00:25:12.801 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 55], 20.00th=[ 430],
00:25:12.801 | 30.00th=[ 701], 40.00th=[ 1099], 50.00th=[ 1536], 60.00th=[ 1888],
00:25:12.801 | 70.00th=[ 3708], 80.00th=[ 3775], 90.00th=[ 5671], 95.00th=[ 9329],
00:25:12.801 | 99.00th=[ 9329], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463],
00:25:12.801 | 99.99th=[ 9463]
00:25:12.801 bw ( KiB/s): min=28672, max=96063, per=1.86%, avg=62367.50, stdev=47652.63, samples=2
00:25:12.801 iops : min= 28, max= 93, avg=60.50, stdev=45.96, samples=2
00:25:12.801 lat (msec) : 50=7.45%, 100=4.26%, 250=4.79%, 500=7.98%, 750=6.91%
00:25:12.801 lat (msec) : 1000=7.45%, 2000=21.81%, >=2000=39.36%
00:25:12.801 cpu : usr=0.01%, sys=0.93%, ctx=579, majf=0, minf=32769
00:25:12.801 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.5%, 32=17.0%, >=64=66.5%
00:25:12.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.802 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6%
00:25:12.802 issued rwts: total=188,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.802 job4: (groupid=0, jobs=1): err= 0: pid=2903628: Sun Dec 15 16:12:39 2024
00:25:12.802 read: IOPS=81, BW=81.4MiB/s (85.3MB/s)(835MiB/10263msec)
00:25:12.802 slat (usec): min=39, max=2072.4k, avg=12187.29, stdev=134988.02
00:25:12.802 clat (msec): min=83, max=6464, avg=642.93, stdev=820.36
00:25:12.802 lat (msec): min=244, max=6537, avg=655.11, stdev=846.77
00:25:12.802 clat percentiles (msec):
00:25:12.802 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 247], 20.00th=[ 249],
00:25:12.802 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 259],
00:25:12.802 | 70.00th=[ 384], 80.00th=[ 531], 90.00th=[ 2333], 95.00th=[ 2400],
00:25:12.802 | 99.00th=[ 2467], 99.50th=[ 2802], 99.90th=[ 6477], 99.95th=[ 6477],
00:25:12.802 | 99.99th=[ 6477]
00:25:12.802 bw ( KiB/s): min=38912, max=522240, per=10.77%, avg=361984.00, stdev=227382.58, samples=4
00:25:12.802 iops : min= 38, max= 510, avg=353.50, stdev=222.05, samples=4
00:25:12.802 lat (msec) : 100=0.12%, 250=33.77%, 500=42.99%, 750=6.95%, >=2000=16.17%
00:25:12.802 cpu : usr=0.06%, sys=1.12%, ctx=1058, majf=0, minf=32769
00:25:12.802 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5%
00:25:12.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.802 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:12.802 issued rwts: total=835,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.802 job4: (groupid=0, jobs=1): err= 0: pid=2903629: Sun Dec 15 16:12:39 2024
00:25:12.802 read: IOPS=6, BW=6488KiB/s (6644kB/s)(65.0MiB/10259msec)
00:25:12.802 slat (usec): min=918, max=2081.6k, avg=156240.81, stdev=524007.68
00:25:12.802 clat (msec): min=102, max=10257, avg=6051.63, stdev=3084.39
00:25:12.802 lat (msec): min=2129, max=10258, avg=6207.87, stdev=3035.15
00:25:12.802 clat percentiles (msec):
00:25:12.802 | 1.00th=[ 103], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2198],
00:25:12.802 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477],
00:25:12.802 | 70.00th=[ 8658], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268],
00:25:12.802 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:25:12.802 | 99.99th=[10268]
00:25:12.802 lat (msec) : 250=1.54%, >=2000=98.46%
00:25:12.802 cpu : usr=0.00%, sys=0.58%, ctx=64, majf=0, minf=16641
00:25:12.802 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1%
00:25:12.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.802 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:25:12.802 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.802 job4: (groupid=0, jobs=1): err= 0: pid=2903630: Sun Dec 15 16:12:39 2024
00:25:12.802 read: IOPS=13, BW=14.0MiB/s (14.6MB/s)(146MiB/10465msec)
00:25:12.802 slat (usec): min=102, max=2137.6k, avg=71101.42, stdev=354901.90
00:25:12.802 clat (msec): min=82, max=10422, avg=8847.49, stdev=2831.21
00:25:12.802 lat (msec): min=662, max=10423, avg=8918.59, stdev=2737.71
00:25:12.802 clat percentiles (msec):
00:25:12.802 | 1.00th=[ 659], 5.00th=[ 676], 10.00th=[ 4010], 20.00th=[ 9731],
00:25:12.802 | 30.00th=[ 9866], 40.00th=[ 9866], 50.00th=[10000], 60.00th=[10134],
00:25:12.802 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402],
00:25:12.802 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:25:12.802 | 99.99th=[10402]
00:25:12.802 bw ( KiB/s): min= 2043, max=18432, per=0.18%, avg=6143.17, stdev=6476.98, samples=6
00:25:12.802 iops : min= 1, max= 18, avg= 5.83, stdev= 6.46, samples=6
00:25:12.802 lat (msec) : 100=0.68%, 750=6.16%, 2000=0.68%, >=2000=92.47%
00:25:12.802 cpu : usr=0.00%, sys=1.42%, ctx=154, majf=0, minf=32769
00:25:12.802 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.5%, 16=11.0%, 32=21.9%, >=64=56.8%
00:25:12.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.802 complete : 0=0.0%, 4=95.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.0%
00:25:12.802 issued rwts: total=146,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.802 job5: (groupid=0, jobs=1): err= 0: pid=2903631: Sun Dec 15 16:12:39 2024
00:25:12.802 read: IOPS=119, BW=120MiB/s (126MB/s)(1200MiB/10011msec)
00:25:12.802 slat (usec): min=48, max=2047.4k, avg=8327.39, stdev=74797.58
00:25:12.802 clat (msec): min=10, max=4189, avg=660.34, stdev=354.85
00:25:12.802 lat (msec): min=10, max=4197, avg=668.67, stdev=370.25
00:25:12.802 clat percentiles (msec):
00:25:12.802 | 1.00th=[ 18], 5.00th=[ 105], 10.00th=[ 418], 20.00th=[ 481],
00:25:12.802 | 30.00th=[ 485], 40.00th=[ 489], 50.00th=[ 609], 60.00th=[ 718],
00:25:12.802 | 70.00th=[ 785], 80.00th=[ 961], 90.00th=[ 995], 95.00th=[ 1045],
00:25:12.802 | 99.00th=[ 1099], 99.50th=[ 2567], 99.90th=[ 4178], 99.95th=[ 4178],
00:25:12.802 | 99.99th=[ 4178]
00:25:12.802 bw ( KiB/s): min=110592, max=268288, per=5.22%, avg=175301.09, stdev=55475.16, samples=11
00:25:12.802 iops : min= 108, max= 262, avg=171.09, stdev=54.09, samples=11
00:25:12.802 lat (msec) : 20=1.08%, 50=2.08%, 100=1.75%, 250=2.42%, 500=38.75%
00:25:12.802 lat (msec) : 750=21.75%, 1000=23.17%, 2000=8.25%, >=2000=0.75%
00:25:12.802 cpu : usr=0.10%, sys=1.90%, ctx=2227, majf=0, minf=32769
00:25:12.802 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.8%
00:25:12.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.802 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:12.802 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.802 job5: (groupid=0, jobs=1): err= 0: pid=2903632: Sun Dec 15 16:12:39 2024
00:25:12.802 read: IOPS=19, BW=19.2MiB/s (20.1MB/s)(200MiB/10422msec)
00:25:12.802 slat (usec): min=176, max=2073.5k, avg=50081.58, stdev=266217.73
00:25:12.802 clat (msec): min=403, max=9300, avg=3678.59, stdev=3483.66
00:25:12.802 lat (msec): min=434, max=9312, avg=3728.68, stdev=3500.43
00:25:12.802 clat percentiles (msec):
00:25:12.802 | 1.00th=[ 435], 5.00th=[ 592], 10.00th=[ 785], 20.00th=[ 1011],
00:25:12.802 | 30.00th=[ 1267], 40.00th=[ 1502], 50.00th=[ 1653], 60.00th=[ 1888],
00:25:12.802 | 70.00th=[ 5738], 80.00th=[ 9060], 90.00th=[ 9194], 95.00th=[ 9194],
00:25:12.802 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329],
00:25:12.802 | 99.99th=[ 9329]
00:25:12.802 bw ( KiB/s): min=61440, max=86016, per=2.19%, avg=73728.00, stdev=17377.86, samples=2
00:25:12.802 iops : min= 60, max= 84, avg=72.00, stdev=16.97, samples=2
00:25:12.802 lat (msec) : 500=2.50%, 750=5.50%, 1000=11.50%, 2000=45.50%, >=2000=35.00%
00:25:12.802 cpu : usr=0.02%, sys=1.33%, ctx=436, majf=0, minf=32769
00:25:12.802 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.0%, >=64=68.5%
00:25:12.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:12.802 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4%
00:25:12.802 issued rwts: total=200,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:12.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:12.802 job5: (groupid=0, jobs=1): err= 0: pid=2903633: Sun Dec 15 16:12:39 2024
00:25:12.802 read: IOPS=33, BW=33.1MiB/s (34.7MB/s)(346MiB/10460msec)
00:25:12.802 slat (usec): min=52, max=2072.2k, avg=29916.86, stdev=203846.92
00:25:12.802 clat (msec): min=105, max=5172, avg=2942.48, stdev=1824.39
00:25:12.802 lat (msec): min=900, max=5183, avg=2972.39, stdev=1818.46
00:25:12.802 clat percentiles (msec):
00:25:12.802 | 1.00th=[ 902], 5.00th=[ 911], 10.00th=[ 911], 20.00th=[ 919],
00:25:12.802 | 30.00th=[ 953], 40.00th=[ 1045], 50.00th=[ 3138], 60.00th=[ 4530],
00:25:12.802 |
70.00th=[ 4665], 80.00th=[ 4732], 90.00th=[ 4933], 95.00th=[ 5067], 00:25:12.802 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:25:12.802 | 99.99th=[ 5201] 00:25:12.802 bw ( KiB/s): min=20439, max=145408, per=2.66%, avg=89284.60, stdev=59502.64, samples=5 00:25:12.802 iops : min= 19, max= 142, avg=87.00, stdev=58.39, samples=5 00:25:12.802 lat (msec) : 250=0.29%, 1000=37.57%, 2000=4.62%, >=2000=57.51% 00:25:12.802 cpu : usr=0.01%, sys=1.47%, ctx=377, majf=0, minf=32769 00:25:12.802 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.8% 00:25:12.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.802 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:12.802 issued rwts: total=346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.802 job5: (groupid=0, jobs=1): err= 0: pid=2903634: Sun Dec 15 16:12:39 2024 00:25:12.802 read: IOPS=168, BW=169MiB/s (177MB/s)(1693MiB/10031msec) 00:25:12.802 slat (usec): min=43, max=2054.5k, avg=5901.38, stdev=50271.43 00:25:12.802 clat (msec): min=29, max=2575, avg=733.14, stdev=561.73 00:25:12.802 lat (msec): min=33, max=2577, avg=739.04, stdev=563.49 00:25:12.802 clat percentiles (msec): 00:25:12.802 | 1.00th=[ 93], 5.00th=[ 342], 10.00th=[ 363], 20.00th=[ 368], 00:25:12.802 | 30.00th=[ 388], 40.00th=[ 493], 50.00th=[ 502], 60.00th=[ 676], 00:25:12.802 | 70.00th=[ 768], 80.00th=[ 978], 90.00th=[ 1062], 95.00th=[ 2500], 00:25:12.802 | 99.00th=[ 2534], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567], 00:25:12.802 | 99.99th=[ 2567] 00:25:12.802 bw ( KiB/s): min=26624, max=354304, per=5.61%, avg=188572.94, stdev=93021.10, samples=17 00:25:12.802 iops : min= 26, max= 346, avg=184.06, stdev=90.81, samples=17 00:25:12.802 lat (msec) : 50=0.35%, 100=0.71%, 250=1.42%, 500=47.02%, 750=18.02% 00:25:12.802 lat (msec) : 1000=15.53%, 2000=9.45%, >=2000=7.50% 00:25:12.802 cpu : usr=0.12%, sys=2.96%, ctx=2555, majf=0, minf=32769 00:25:12.802 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:25:12.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.802 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.802 issued rwts: total=1693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.802 job5: (groupid=0, jobs=1): err= 0: pid=2903635: Sun Dec 15 16:12:39 2024 00:25:12.802 read: IOPS=293, BW=294MiB/s (308MB/s)(2941MiB/10011msec) 00:25:12.802 slat (usec): min=34, max=2055.0k, avg=3396.20, stdev=60863.23 00:25:12.802 clat (msec): min=9, max=3995, avg=272.24, stdev=486.61 00:25:12.802 lat (msec): min=10, max=4016, avg=275.64, stdev=493.13 00:25:12.802 clat percentiles (msec): 00:25:12.802 | 1.00th=[ 123], 5.00th=[ 124], 10.00th=[ 124], 20.00th=[ 125], 00:25:12.802 | 30.00th=[ 126], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 129], 00:25:12.802 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 326], 00:25:12.802 | 99.00th=[ 2467], 99.50th=[ 2467], 99.90th=[ 3977], 99.95th=[ 3977], 00:25:12.802 | 99.99th=[ 4010] 00:25:12.802 bw ( KiB/s): min=421888, max=1042395, per=22.63%, avg=760676.86, stdev=274533.18, samples=7 00:25:12.803 iops : min= 412, max= 1017, avg=742.57, stdev=267.77, samples=7 00:25:12.803 lat (msec) : 10=0.03%, 20=0.24%, 50=0.37%, 100=0.17%, 250=81.88% 00:25:12.803 lat (msec) : 500=12.65%, >=2000=4.66% 00:25:12.803 cpu : usr=0.13%, 
sys=2.47%, ctx=2865, majf=0, minf=32769 00:25:12.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:12.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.803 issued rwts: total=2941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.803 job5: (groupid=0, jobs=1): err= 0: pid=2903636: Sun Dec 15 16:12:39 2024 00:25:12.803 read: IOPS=130, BW=130MiB/s (137MB/s)(1344MiB/10308msec) 00:25:12.803 slat (usec): min=42, max=2081.5k, avg=7598.00, stdev=87813.64 00:25:12.803 clat (msec): min=90, max=4519, avg=920.80, stdev=1208.84 00:25:12.803 lat (msec): min=244, max=4523, avg=928.40, stdev=1213.29 00:25:12.803 clat percentiles (msec): 00:25:12.803 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 247], 20.00th=[ 249], 00:25:12.803 | 30.00th=[ 251], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:25:12.803 | 70.00th=[ 1083], 80.00th=[ 1234], 90.00th=[ 2433], 95.00th=[ 4077], 00:25:12.803 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4530], 99.95th=[ 4530], 00:25:12.803 | 99.99th=[ 4530] 00:25:12.803 bw ( KiB/s): min=12288, max=521197, per=7.41%, avg=248932.50, stdev=227776.27, samples=10 00:25:12.803 iops : min= 12, max= 508, avg=243.00, stdev=222.31, samples=10 00:25:12.803 lat (msec) : 100=0.07%, 250=31.18%, 500=37.65%, 2000=11.76%, >=2000=19.35% 00:25:12.803 cpu : usr=0.07%, sys=1.81%, ctx=1538, majf=0, minf=32769 00:25:12.803 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:25:12.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.803 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.803 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.803 job5: (groupid=0, jobs=1): err= 0: pid=2903637: Sun Dec 15 16:12:39 2024 00:25:12.803 read: IOPS=29, BW=29.7MiB/s (31.1MB/s)(310MiB/10451msec) 00:25:12.803 slat (usec): min=422, max=2085.8k, avg=33383.17, stdev=214824.04 00:25:12.803 clat (msec): min=100, max=6286, avg=3969.85, stdev=1846.56 00:25:12.803 lat (msec): min=532, max=6289, avg=4003.23, stdev=1825.78 00:25:12.803 clat percentiles (msec): 00:25:12.803 | 1.00th=[ 535], 5.00th=[ 550], 10.00th=[ 1552], 20.00th=[ 2702], 00:25:12.803 | 30.00th=[ 3037], 40.00th=[ 3239], 50.00th=[ 3641], 60.00th=[ 4111], 00:25:12.803 | 70.00th=[ 5873], 80.00th=[ 6007], 90.00th=[ 6141], 95.00th=[ 6208], 00:25:12.803 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6275], 99.95th=[ 6275], 00:25:12.803 | 99.99th=[ 6275] 00:25:12.803 bw ( KiB/s): min= 4096, max=223232, per=1.85%, avg=62123.83, stdev=83075.35, samples=6 00:25:12.803 iops : min= 4, max= 218, avg=60.50, stdev=81.24, samples=6 00:25:12.803 lat (msec) : 250=0.32%, 750=9.35%, 2000=5.81%, >=2000=84.52% 00:25:12.803 cpu : usr=0.03%, sys=1.37%, ctx=700, majf=0, minf=32769 00:25:12.803 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.3%, >=64=79.7% 00:25:12.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.803 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:12.803 issued rwts: total=310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.803 job5: (groupid=0, jobs=1): err= 0: pid=2903638: Sun Dec 15 16:12:39 2024 00:25:12.803 read: IOPS=124, 
BW=124MiB/s (130MB/s)(1300MiB/10466msec) 00:25:12.803 slat (usec): min=49, max=2165.7k, avg=7965.15, stdev=100313.21 00:25:12.803 clat (msec): min=106, max=5339, avg=985.14, stdev=1500.78 00:25:12.803 lat (msec): min=248, max=6436, avg=993.10, stdev=1509.85 00:25:12.803 clat percentiles (msec): 00:25:12.803 | 1.00th=[ 251], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 253], 00:25:12.803 | 30.00th=[ 255], 40.00th=[ 255], 50.00th=[ 257], 60.00th=[ 262], 00:25:12.803 | 70.00th=[ 514], 80.00th=[ 961], 90.00th=[ 2467], 95.00th=[ 5201], 00:25:12.803 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:25:12.803 | 99.99th=[ 5336] 00:25:12.803 bw ( KiB/s): min= 2048, max=509952, per=7.14%, avg=239979.40, stdev=207247.01, samples=10 00:25:12.803 iops : min= 2, max= 498, avg=234.30, stdev=202.39, samples=10 00:25:12.803 lat (msec) : 250=1.38%, 500=66.38%, 750=11.08%, 1000=1.46%, >=2000=19.69% 00:25:12.803 cpu : usr=0.05%, sys=2.07%, ctx=1370, majf=0, minf=32769 00:25:12.803 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2% 00:25:12.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.803 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.803 issued rwts: total=1300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.803 job5: (groupid=0, jobs=1): err= 0: pid=2903639: Sun Dec 15 16:12:39 2024 00:25:12.803 read: IOPS=25, BW=25.8MiB/s (27.1MB/s)(263MiB/10192msec) 00:25:12.803 slat (usec): min=625, max=2124.5k, avg=38019.15, stdev=230924.19 00:25:12.803 clat (msec): min=190, max=9036, avg=1462.14, stdev=1964.58 00:25:12.803 lat (msec): min=192, max=9053, avg=1500.16, stdev=2023.66 00:25:12.803 clat percentiles (msec): 00:25:12.803 | 1.00th=[ 197], 5.00th=[ 226], 10.00th=[ 300], 20.00th=[ 456], 00:25:12.803 | 30.00th=[ 651], 40.00th=[ 869], 50.00th=[ 944], 60.00th=[ 1020], 00:25:12.803 | 70.00th=[ 1070], 80.00th=[ 1133], 90.00th=[ 5201], 95.00th=[ 5470], 00:25:12.803 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:25:12.803 | 99.99th=[ 9060] 00:25:12.803 bw ( KiB/s): min=106496, max=172032, per=4.14%, avg=139264.00, stdev=46340.95, samples=2 00:25:12.803 iops : min= 104, max= 168, avg=136.00, stdev=45.25, samples=2 00:25:12.803 lat (msec) : 250=6.46%, 500=15.59%, 750=13.31%, 1000=23.19%, 2000=28.14% 00:25:12.803 lat (msec) : >=2000=13.31% 00:25:12.803 cpu : usr=0.02%, sys=1.16%, ctx=454, majf=0, minf=32769 00:25:12.803 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.1%, 32=12.2%, >=64=76.0% 00:25:12.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.803 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:25:12.803 issued rwts: total=263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.803 job5: (groupid=0, jobs=1): err= 0: pid=2903640: Sun Dec 15 16:12:39 2024 00:25:12.803 read: IOPS=44, BW=44.6MiB/s (46.7MB/s)(462MiB/10363msec) 00:25:12.803 slat (usec): min=53, max=2130.8k, avg=22246.07, stdev=166089.67 00:25:12.803 clat (msec): min=80, max=6727, avg=2606.28, stdev=1770.78 00:25:12.803 lat (msec): min=901, max=6745, avg=2628.53, stdev=1778.26 00:25:12.803 clat percentiles (msec): 00:25:12.803 | 1.00th=[ 902], 5.00th=[ 902], 10.00th=[ 911], 20.00th=[ 911], 00:25:12.803 | 30.00th=[ 919], 40.00th=[ 1116], 50.00th=[ 2400], 60.00th=[ 2735], 00:25:12.803 | 70.00th=[ 3037], 80.00th=[ 5134], 
90.00th=[ 5201], 95.00th=[ 5201], 00:25:12.803 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6745], 99.95th=[ 6745], 00:25:12.803 | 99.99th=[ 6745] 00:25:12.803 bw ( KiB/s): min=10219, max=145408, per=2.54%, avg=85468.12, stdev=55948.96, samples=8 00:25:12.803 iops : min= 9, max= 142, avg=83.25, stdev=54.74, samples=8 00:25:12.803 lat (msec) : 100=0.22%, 1000=34.42%, 2000=9.31%, >=2000=56.06% 00:25:12.803 cpu : usr=0.03%, sys=1.40%, ctx=610, majf=0, minf=32769 00:25:12.803 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.4% 00:25:12.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.803 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:12.803 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.803 job5: (groupid=0, jobs=1): err= 0: pid=2903641: Sun Dec 15 16:12:39 2024 00:25:12.803 read: IOPS=61, BW=61.7MiB/s (64.7MB/s)(638MiB/10340msec) 00:25:12.803 slat (usec): min=54, max=2072.2k, avg=16039.67, stdev=134441.64 00:25:12.803 clat (msec): min=101, max=6823, avg=1933.34, stdev=2268.70 00:25:12.803 lat (msec): min=518, max=6825, avg=1949.38, stdev=2273.30 00:25:12.803 clat percentiles (msec): 00:25:12.803 | 1.00th=[ 518], 5.00th=[ 523], 10.00th=[ 527], 20.00th=[ 535], 00:25:12.803 | 30.00th=[ 550], 40.00th=[ 693], 50.00th=[ 1011], 60.00th=[ 1099], 00:25:12.803 | 70.00th=[ 1167], 80.00th=[ 2567], 90.00th=[ 6544], 95.00th=[ 6678], 00:25:12.803 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:25:12.803 | 99.99th=[ 6812] 00:25:12.803 bw ( KiB/s): min= 6144, max=239616, per=3.11%, avg=104415.60, stdev=94761.14, samples=10 00:25:12.803 iops : min= 6, max= 234, avg=101.80, stdev=92.60, samples=10 00:25:12.803 lat (msec) : 250=0.16%, 750=43.10%, 1000=6.43%, 2000=29.00%, >=2000=21.32% 00:25:12.803 cpu : usr=0.08%, sys=1.30%, ctx=851, majf=0, minf=32769 00:25:12.803 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:25:12.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.803 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:12.803 issued rwts: total=638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.803 job5: (groupid=0, jobs=1): err= 0: pid=2903642: Sun Dec 15 16:12:39 2024 00:25:12.803 read: IOPS=87, BW=87.4MiB/s (91.6MB/s)(914MiB/10460msec) 00:25:12.803 slat (usec): min=40, max=2075.3k, avg=11325.88, stdev=127782.86 00:25:12.803 clat (msec): min=103, max=4623, avg=1073.18, stdev=1612.62 00:25:12.803 lat (msec): min=249, max=4628, avg=1084.50, stdev=1619.55 00:25:12.803 clat percentiles (msec): 00:25:12.804 | 1.00th=[ 251], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 253], 00:25:12.804 | 30.00th=[ 255], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 266], 00:25:12.804 | 70.00th=[ 376], 80.00th=[ 550], 90.00th=[ 4463], 95.00th=[ 4530], 00:25:12.804 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:25:12.804 | 99.99th=[ 4597] 00:25:12.804 bw ( KiB/s): min=10219, max=512000, per=7.98%, avg=268284.50, stdev=237547.84, samples=6 00:25:12.804 iops : min= 9, max= 500, avg=261.83, stdev=232.19, samples=6 00:25:12.804 lat (msec) : 250=0.77%, 500=78.99%, 750=0.66%, >=2000=19.58% 00:25:12.804 cpu : usr=0.07%, sys=1.60%, ctx=1042, majf=0, minf=32769 00:25:12.804 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:25:12.804 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.804 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.804 issued rwts: total=914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.804 job5: (groupid=0, jobs=1): err= 0: pid=2903643: Sun Dec 15 16:12:39 2024 00:25:12.804 read: IOPS=208, BW=208MiB/s (218MB/s)(2175MiB/10451msec) 00:25:12.804 slat (usec): min=41, max=2101.6k, avg=4759.47, stdev=76584.89 00:25:12.804 clat (msec): min=90, max=6637, avg=583.07, stdev=1493.39 00:25:12.804 lat (msec): min=123, max=6639, avg=587.83, stdev=1498.96 00:25:12.804 clat percentiles (msec): 00:25:12.804 | 1.00th=[ 124], 5.00th=[ 125], 10.00th=[ 125], 20.00th=[ 125], 00:25:12.804 | 30.00th=[ 126], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 127], 00:25:12.804 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 1318], 95.00th=[ 6544], 00:25:12.804 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:25:12.804 | 99.99th=[ 6611] 00:25:12.804 bw ( KiB/s): min= 4096, max=1038336, per=13.86%, avg=465803.89, stdev=489807.57, samples=9 00:25:12.804 iops : min= 4, max= 1014, avg=454.78, stdev=478.44, samples=9 00:25:12.804 lat (msec) : 100=0.05%, 250=84.69%, 500=2.11%, 750=1.52%, 1000=0.74% 00:25:12.804 lat (msec) : 2000=4.64%, >=2000=6.25% 00:25:12.804 cpu : usr=0.06%, sys=2.78%, ctx=2240, majf=0, minf=32769 00:25:12.804 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:25:12.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.804 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.804 00:25:12.804 Run status group 0 (all jobs): 00:25:12.804 READ: bw=3282MiB/s (3441MB/s), 995KiB/s-477MiB/s (1019kB/s-500MB/s), io=40.1GiB (43.1GB), run=10011-12515msec 00:25:12.804 00:25:12.804 Disk stats (read/write): 00:25:12.804 nvme0n1: ios=64298/0, merge=0/0, ticks=7525986/0, in_queue=7525986, util=98.52% 00:25:12.804 nvme1n1: ios=39633/0, merge=0/0, ticks=7381067/0, in_queue=7381067, util=98.44% 00:25:12.804 nvme2n1: ios=35502/0, merge=0/0, ticks=6910411/0, in_queue=6910411, util=98.70% 00:25:12.804 nvme3n1: ios=37799/0, merge=0/0, ticks=6268622/0, in_queue=6268622, util=98.42% 00:25:12.804 nvme4n1: ios=40107/0, merge=0/0, ticks=6546307/0, in_queue=6546307, util=98.80% 00:25:12.804 nvme5n1: ios=109872/0, merge=0/0, ticks=7256171/0, in_queue=7256171, util=99.14% 00:25:12.804 16:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:25:12.804 16:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:25:12.804 16:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:12.804 16:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:25:12.804 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:12.804 16:12:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:13.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:13.372 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:13.373 16:12:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:14.310 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:14.310 16:12:42 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:14.310 16:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:15.247 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:15.247 16:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:16.185 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 
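(The repeated lsblk/grep records around each nvme disconnect are the serial-number wait loop: the test polls until no block device reports the given SPDK serial, then deletes the subsystem over RPC. A sketch of that pattern, reconstructed only from these trace records; the real helper in autotest_common.sh may differ in detail, and the retry cap below is an assumption not visible in this log:

    waitforserial_disconnect() {
        local serial=$1 i=0
        # poll until the serial no longer appears in the device list
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((i++ < 15)) || return 1    # assumed retry cap, not shown in this log
            sleep 1
        done
        # second check in list form, as the trace shows (autotest_common.sh line 1227)
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1
        return 0
    }

On return 0 the script issues rpc_cmd nvmf_delete_subsystem for the matching nqn.2016-06.io.spdk:cnodeN, as the surrounding records show.)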
00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:16.185 16:12:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:17.122 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:17.122 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:25:17.122 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:25:17.122 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:17.122 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:25:17.122 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:17.122 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:17.382 16:12:45 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:17.382 rmmod nvme_rdma 00:25:17.382 rmmod nvme_fabrics 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@513 -- # '[' -n 2902234 ']' 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@514 -- # killprocess 2902234 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 2902234 ']' 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 2902234 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2902234 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2902234' 00:25:17.382 killing process with pid 2902234 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 2902234 00:25:17.382 16:12:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 2902234 00:25:17.641 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:17.641 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:17.641 00:25:17.641 real 0m33.903s 00:25:17.641 user 1m57.063s 00:25:17.641 sys 0m17.178s 00:25:17.641 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.641 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:17.641 ************************************ 00:25:17.641 END TEST nvmf_srq_overwhelm 00:25:17.641 ************************************ 00:25:17.901 16:12:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:17.901 16:12:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:17.901 16:12:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:17.901 16:12:46 nvmf_rdma.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.901 ************************************ 00:25:17.901 START TEST nvmf_shutdown 00:25:17.901 ************************************ 00:25:17.901 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:17.901 * Looking for test storage... 00:25:17.901 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:17.901 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:17.901 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:17.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.902 --rc genhtml_branch_coverage=1 00:25:17.902 --rc genhtml_function_coverage=1 00:25:17.902 --rc genhtml_legend=1 00:25:17.902 --rc geninfo_all_blocks=1 00:25:17.902 --rc geninfo_unexecuted_blocks=1 00:25:17.902 00:25:17.902 ' 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:17.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.902 --rc genhtml_branch_coverage=1 00:25:17.902 --rc genhtml_function_coverage=1 00:25:17.902 --rc genhtml_legend=1 00:25:17.902 --rc geninfo_all_blocks=1 00:25:17.902 --rc geninfo_unexecuted_blocks=1 00:25:17.902 00:25:17.902 ' 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:17.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.902 --rc genhtml_branch_coverage=1 00:25:17.902 --rc genhtml_function_coverage=1 00:25:17.902 --rc genhtml_legend=1 00:25:17.902 --rc geninfo_all_blocks=1 00:25:17.902 --rc geninfo_unexecuted_blocks=1 00:25:17.902 00:25:17.902 ' 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:17.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.902 --rc genhtml_branch_coverage=1 00:25:17.902 --rc genhtml_function_coverage=1 00:25:17.902 --rc genhtml_legend=1 00:25:17.902 --rc geninfo_all_blocks=1 00:25:17.902 --rc geninfo_unexecuted_blocks=1 00:25:17.902 00:25:17.902 ' 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.902 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:18.162 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:18.162 16:12:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@169 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:18.162 ************************************ 00:25:18.162 START TEST nvmf_shutdown_tc1 00:25:18.162 ************************************ 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:18.162 16:12:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:26.291 16:12:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:26.291 
16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:26.291 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:26.291 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:26.291 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:26.291 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:26.292 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # rdma_device_init 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:26.292 16:12:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:26.292 6: mlx_0_0: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:25:26.292 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:26.292 altname enp217s0f0np0 00:25:26.292 altname ens818f0np0 00:25:26.292 inet 192.168.100.8/24 scope global mlx_0_0 00:25:26.292 valid_lft forever preferred_lft forever 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:26.292 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:26.292 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:26.292 altname enp217s0f1np1 00:25:26.292 altname ens818f1np1 00:25:26.292 inet 192.168.100.9/24 scope global mlx_0_1 00:25:26.292 valid_lft forever preferred_lft forever 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:26.292 16:12:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:26.292 192.168.100.9' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:26.292 192.168.100.9' 00:25:26.292 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # head -n 1 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:26.293 16:12:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:26.293 192.168.100.9' 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # tail -n +2 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # head -n 1 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=2910202 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 2910202 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2910202 ']' 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.293 [2024-12-15 16:12:53.677762] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
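Editorial sketch of the nvmftestinit discovery just traced: the harness matched both PCI functions (0000:d9:00.0/.1) against vendor:device 0x15b3:0x1015, globbed each function's net interface out of /sys/bus/pci/devices/$pci/net/*, and stripped the IPv4 address with the exact ip/awk/cut pipeline shown above. A minimal standalone version, assuming only sysfs and iproute2 (the harness's pci_bus_cache plumbing in nvmf/common.sh is omitted):

  #!/usr/bin/env bash
  # Sketch only: keep 0x15b3:0x1015 functions, find their net interface,
  # print its IPv4 address -- same three steps as the trace above.
  get_ip_address() {
    # identical pipeline to the trace for mlx_0_0 / mlx_0_1:
    # CIDR field from `ip -o -4`, prefix length cut off
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x15b3 && $(<"$pci/device") == 0x1015 ]] || continue
    for netdir in "$pci"/net/*; do
      [[ -e $netdir ]] || continue
      dev=${netdir##*/}                        # e.g. mlx_0_0
      printf '%s -> %s %s\n' "${pci##*/}" "$dev" "$(get_ip_address "$dev")"
    done
  done

On this rig the sketch would print the two interfaces with 192.168.100.8 and 192.168.100.9, matching NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP above.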
00:25:26.293 [2024-12-15 16:12:53.677811] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.293 [2024-12-15 16:12:53.747806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:26.293 [2024-12-15 16:12:53.787448] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.293 [2024-12-15 16:12:53.787488] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.293 [2024-12-15 16:12:53.787498] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.293 [2024-12-15 16:12:53.787510] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.293 [2024-12-15 16:12:53.787517] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.293 [2024-12-15 16:12:53.787567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.293 [2024-12-15 16:12:53.787649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.293 [2024-12-15 16:12:53.787765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.293 [2024-12-15 16:12:53.787766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.293 16:12:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.293 [2024-12-15 16:12:53.981957] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f76140/0x1f7a630) succeed. 00:25:26.293 [2024-12-15 16:12:53.992638] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f77780/0x1fbbcd0) succeed. 
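One detail worth decoding in the lines above: nvmfappstart launched nvmf_tgt with -m 0x1E, and 0x1E is binary 11110, i.e. bits 1 through 4 set, which is exactly why four reactors come up on cores 1, 2, 3 and 4 while core 0 stays free (the bdevperf client later runs there with -m 0x1). A quick sketch to decode any such mask:

  mask=0x1E                                  # the core mask passed to nvmf_tgt above
  for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # prints cores 1 2 3 4, matching the four reactor notices in the trace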
00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.293 Malloc1 00:25:26.293 [2024-12-15 16:12:54.220929] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:26.293 Malloc2 00:25:26.293 Malloc3 00:25:26.293 Malloc4 00:25:26.293 Malloc5 00:25:26.293 Malloc6 00:25:26.293 Malloc7 00:25:26.293 Malloc8 00:25:26.293 Malloc9 00:25:26.293 Malloc10 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2910292 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2910292 /var/tmp/bdevperf.sock 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2910292 ']' 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:26.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
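For create_subsystems, the trace shows num_subsystems=({1..10}) being expanded, rpcs.txt removed, one `cat` per subsystem id, then a bare rpc_cmd; shutdown.sh batches RPC lines into rpcs.txt this way and replays the file in one go, which is what produces the ten Malloc bdevs and the RDMA listener at 192.168.100.8:4420 above. A condensed sketch of the pattern, assuming rpc_cmd accepts batched RPCs on stdin; the heredoc bodies are not in the trace, so the four RPC lines below are illustrative assumptions only:

  num_subsystems=({1..10})            # brace expansion: subsystem ids 1 through 10
  rm -f rpcs.txt
  for i in "${num_subsystems[@]}"; do
    cat >> rpcs.txt <<EOF
  bdev_malloc_create 64 512 -b Malloc$i
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  EOF
  done
  rpc_cmd < rpcs.txt                  # replay the accumulated RPCs in one session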
00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.293 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.293 { 00:25:26.293 "params": { 00:25:26.293 "name": "Nvme$subsystem", 00:25:26.293 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 [2024-12-15 16:12:54.707533] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:26.294 [2024-12-15 16:12:54.707586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:26.294 { 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme$subsystem", 00:25:26.294 "trtype": "$TEST_TRANSPORT", 00:25:26.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "$NVMF_PORT", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.294 "hdgst": ${hdgst:-false}, 00:25:26.294 "ddgst": ${ddgst:-false} 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 } 00:25:26.294 EOF 00:25:26.294 )") 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:25:26.294 16:12:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme1", 00:25:26.294 "trtype": "rdma", 00:25:26.294 "traddr": "192.168.100.8", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "4420", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:26.294 "hdgst": false, 00:25:26.294 "ddgst": false 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 },{ 00:25:26.294 "params": { 00:25:26.294 "name": "Nvme2", 00:25:26.294 "trtype": "rdma", 00:25:26.294 "traddr": "192.168.100.8", 00:25:26.294 "adrfam": "ipv4", 00:25:26.294 "trsvcid": "4420", 00:25:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:26.294 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:26.294 "hdgst": false, 00:25:26.294 "ddgst": false 00:25:26.294 }, 00:25:26.294 "method": "bdev_nvme_attach_controller" 00:25:26.294 },{ 00:25:26.295 "params": { 00:25:26.295 "name": "Nvme3", 00:25:26.295 "trtype": "rdma", 00:25:26.295 "traddr": "192.168.100.8", 00:25:26.295 "adrfam": "ipv4", 00:25:26.295 "trsvcid": "4420", 00:25:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:26.295 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:26.295 "hdgst": false, 00:25:26.295 "ddgst": false 00:25:26.295 }, 00:25:26.295 "method": "bdev_nvme_attach_controller" 00:25:26.295 },{ 00:25:26.295 "params": { 00:25:26.295 "name": "Nvme4", 00:25:26.295 "trtype": "rdma", 00:25:26.295 "traddr": "192.168.100.8", 00:25:26.295 "adrfam": "ipv4", 00:25:26.295 "trsvcid": "4420", 00:25:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:26.295 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:26.295 "hdgst": false, 00:25:26.295 "ddgst": false 00:25:26.295 }, 00:25:26.295 "method": "bdev_nvme_attach_controller" 00:25:26.295 },{ 00:25:26.295 "params": { 00:25:26.295 "name": "Nvme5", 00:25:26.295 "trtype": "rdma", 00:25:26.295 "traddr": "192.168.100.8", 00:25:26.295 "adrfam": "ipv4", 00:25:26.295 "trsvcid": "4420", 00:25:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:26.295 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:26.295 "hdgst": false, 00:25:26.295 "ddgst": false 00:25:26.295 }, 00:25:26.295 "method": "bdev_nvme_attach_controller" 00:25:26.295 },{ 00:25:26.295 "params": { 00:25:26.295 "name": "Nvme6", 00:25:26.295 "trtype": "rdma", 00:25:26.295 "traddr": "192.168.100.8", 00:25:26.295 "adrfam": "ipv4", 00:25:26.295 "trsvcid": "4420", 00:25:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:26.295 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:26.295 "hdgst": false, 00:25:26.295 "ddgst": false 00:25:26.295 }, 00:25:26.295 "method": "bdev_nvme_attach_controller" 00:25:26.295 },{ 00:25:26.295 "params": { 00:25:26.295 "name": "Nvme7", 00:25:26.295 "trtype": "rdma", 00:25:26.295 "traddr": "192.168.100.8", 00:25:26.295 "adrfam": "ipv4", 00:25:26.295 "trsvcid": "4420", 00:25:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:26.295 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:26.295 "hdgst": false, 00:25:26.295 "ddgst": false 00:25:26.295 }, 00:25:26.295 "method": "bdev_nvme_attach_controller" 00:25:26.295 },{ 00:25:26.295 "params": { 00:25:26.295 "name": "Nvme8", 00:25:26.295 "trtype": "rdma", 00:25:26.295 "traddr": "192.168.100.8", 00:25:26.295 "adrfam": "ipv4", 00:25:26.295 "trsvcid": "4420", 00:25:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:25:26.295 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:26.295 "hdgst": false, 00:25:26.295 "ddgst": false 00:25:26.295 }, 00:25:26.295 "method": "bdev_nvme_attach_controller" 00:25:26.295 },{ 00:25:26.295 "params": { 00:25:26.295 "name": "Nvme9", 00:25:26.295 "trtype": "rdma", 00:25:26.295 "traddr": "192.168.100.8", 00:25:26.295 "adrfam": "ipv4", 00:25:26.295 "trsvcid": "4420", 00:25:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:26.295 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:26.295 "hdgst": false, 00:25:26.295 "ddgst": false 00:25:26.295 }, 00:25:26.295 "method": "bdev_nvme_attach_controller" 00:25:26.295 },{ 00:25:26.295 "params": { 00:25:26.295 "name": "Nvme10", 00:25:26.295 "trtype": "rdma", 00:25:26.295 "traddr": "192.168.100.8", 00:25:26.295 "adrfam": "ipv4", 00:25:26.295 "trsvcid": "4420", 00:25:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:26.295 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:26.295 "hdgst": false, 00:25:26.295 "ddgst": false 00:25:26.295 }, 00:25:26.295 "method": "bdev_nvme_attach_controller" 00:25:26.295 }' 00:25:26.295 [2024-12-15 16:12:54.781518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.295 [2024-12-15 16:12:54.820366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2910292 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:27.231 16:12:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:28.166 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2910292 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2910202 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:25:28.166 16:12:56 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.166 { 00:25:28.166 "params": { 00:25:28.166 "name": "Nvme$subsystem", 00:25:28.166 "trtype": "$TEST_TRANSPORT", 00:25:28.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.166 "adrfam": "ipv4", 00:25:28.166 "trsvcid": "$NVMF_PORT", 00:25:28.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.166 "hdgst": ${hdgst:-false}, 00:25:28.166 "ddgst": ${ddgst:-false} 00:25:28.166 }, 00:25:28.166 "method": "bdev_nvme_attach_controller" 00:25:28.166 } 00:25:28.166 EOF 00:25:28.166 )") 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.166 { 00:25:28.166 "params": { 00:25:28.166 "name": "Nvme$subsystem", 00:25:28.166 "trtype": "$TEST_TRANSPORT", 00:25:28.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.166 "adrfam": "ipv4", 00:25:28.166 "trsvcid": "$NVMF_PORT", 00:25:28.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.166 "hdgst": ${hdgst:-false}, 00:25:28.166 "ddgst": ${ddgst:-false} 00:25:28.166 }, 00:25:28.166 "method": "bdev_nvme_attach_controller" 00:25:28.166 } 00:25:28.166 EOF 00:25:28.166 )") 00:25:28.166 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.167 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.167 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.167 { 00:25:28.167 "params": { 00:25:28.167 "name": "Nvme$subsystem", 00:25:28.167 "trtype": "$TEST_TRANSPORT", 00:25:28.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.167 "adrfam": "ipv4", 00:25:28.167 "trsvcid": "$NVMF_PORT", 00:25:28.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.167 "hdgst": ${hdgst:-false}, 00:25:28.167 "ddgst": ${ddgst:-false} 00:25:28.167 }, 00:25:28.167 "method": "bdev_nvme_attach_controller" 00:25:28.167 } 00:25:28.167 EOF 00:25:28.167 )") 00:25:28.167 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.167 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.167 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.167 { 00:25:28.167 "params": { 00:25:28.167 "name": "Nvme$subsystem", 00:25:28.167 "trtype": "$TEST_TRANSPORT", 00:25:28.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.167 "adrfam": "ipv4", 00:25:28.167 "trsvcid": "$NVMF_PORT", 00:25:28.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.167 "hdgst": ${hdgst:-false}, 00:25:28.167 "ddgst": ${ddgst:-false} 00:25:28.167 }, 00:25:28.167 "method": 
"bdev_nvme_attach_controller" 00:25:28.167 } 00:25:28.167 EOF 00:25:28.167 )") 00:25:28.167 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.167 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.167 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.167 { 00:25:28.167 "params": { 00:25:28.167 "name": "Nvme$subsystem", 00:25:28.167 "trtype": "$TEST_TRANSPORT", 00:25:28.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.167 "adrfam": "ipv4", 00:25:28.167 "trsvcid": "$NVMF_PORT", 00:25:28.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.167 "hdgst": ${hdgst:-false}, 00:25:28.167 "ddgst": ${ddgst:-false} 00:25:28.167 }, 00:25:28.167 "method": "bdev_nvme_attach_controller" 00:25:28.167 } 00:25:28.167 EOF 00:25:28.167 )") 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.426 { 00:25:28.426 "params": { 00:25:28.426 "name": "Nvme$subsystem", 00:25:28.426 "trtype": "$TEST_TRANSPORT", 00:25:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.426 "adrfam": "ipv4", 00:25:28.426 "trsvcid": "$NVMF_PORT", 00:25:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.426 "hdgst": ${hdgst:-false}, 00:25:28.426 "ddgst": ${ddgst:-false} 00:25:28.426 }, 00:25:28.426 "method": "bdev_nvme_attach_controller" 00:25:28.426 } 00:25:28.426 EOF 00:25:28.426 )") 00:25:28.426 [2024-12-15 16:12:56.742542] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:28.426 [2024-12-15 16:12:56.742594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910777 ] 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.426 { 00:25:28.426 "params": { 00:25:28.426 "name": "Nvme$subsystem", 00:25:28.426 "trtype": "$TEST_TRANSPORT", 00:25:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.426 "adrfam": "ipv4", 00:25:28.426 "trsvcid": "$NVMF_PORT", 00:25:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.426 "hdgst": ${hdgst:-false}, 00:25:28.426 "ddgst": ${ddgst:-false} 00:25:28.426 }, 00:25:28.426 "method": "bdev_nvme_attach_controller" 00:25:28.426 } 00:25:28.426 EOF 00:25:28.426 )") 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.426 { 00:25:28.426 "params": { 00:25:28.426 "name": "Nvme$subsystem", 00:25:28.426 "trtype": "$TEST_TRANSPORT", 00:25:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.426 "adrfam": "ipv4", 00:25:28.426 "trsvcid": "$NVMF_PORT", 00:25:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.426 "hdgst": ${hdgst:-false}, 00:25:28.426 "ddgst": ${ddgst:-false} 00:25:28.426 }, 00:25:28.426 "method": "bdev_nvme_attach_controller" 00:25:28.426 } 00:25:28.426 EOF 00:25:28.426 )") 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.426 { 00:25:28.426 "params": { 00:25:28.426 "name": "Nvme$subsystem", 00:25:28.426 "trtype": "$TEST_TRANSPORT", 00:25:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.426 "adrfam": "ipv4", 00:25:28.426 "trsvcid": "$NVMF_PORT", 00:25:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.426 "hdgst": ${hdgst:-false}, 00:25:28.426 "ddgst": ${ddgst:-false} 00:25:28.426 }, 00:25:28.426 "method": "bdev_nvme_attach_controller" 00:25:28.426 } 00:25:28.426 EOF 00:25:28.426 )") 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:28.426 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:28.426 { 00:25:28.426 "params": { 00:25:28.426 "name": 
"Nvme$subsystem", 00:25:28.426 "trtype": "$TEST_TRANSPORT", 00:25:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.426 "adrfam": "ipv4", 00:25:28.426 "trsvcid": "$NVMF_PORT", 00:25:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.426 "hdgst": ${hdgst:-false}, 00:25:28.426 "ddgst": ${ddgst:-false} 00:25:28.426 }, 00:25:28.426 "method": "bdev_nvme_attach_controller" 00:25:28.427 } 00:25:28.427 EOF 00:25:28.427 )") 00:25:28.427 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:28.427 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:25:28.427 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:25:28.427 16:12:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme1", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme2", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme3", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme4", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme5", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme6", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": 
"bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme7", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme8", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme9", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 },{ 00:25:28.427 "params": { 00:25:28.427 "name": "Nvme10", 00:25:28.427 "trtype": "rdma", 00:25:28.427 "traddr": "192.168.100.8", 00:25:28.427 "adrfam": "ipv4", 00:25:28.427 "trsvcid": "4420", 00:25:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:28.427 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:28.427 "hdgst": false, 00:25:28.427 "ddgst": false 00:25:28.427 }, 00:25:28.427 "method": "bdev_nvme_attach_controller" 00:25:28.427 }' 00:25:28.427 [2024-12-15 16:12:56.814814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.427 [2024-12-15 16:12:56.853215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.364 Running I/O for 1 seconds... 
00:25:30.561   3296.00 IOPS,   206.00 MiB/s
00:25:30.561                                           Latency(us)
00:25:30.561 [2024-12-15T15:12:59.131Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:30.561 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme1n1  :       1.18     339.72      21.23       0.00      0.00  179391.43   46347.06  213070.64
00:25:30.561 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme2n1  :       1.19     376.85      23.55       0.00      0.00  165673.78    9646.90  197971.15
00:25:30.561 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme3n1  :       1.19     403.37      25.21       0.00      0.00  152178.88    5740.95  142606.34
00:25:30.561 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme4n1  :       1.19     402.97      25.19       0.00      0.00  150325.49    9542.04  135895.45
00:25:30.561 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme5n1  :       1.19     389.25      24.33       0.00      0.00  153204.30   10328.47  125829.12
00:25:30.561 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme6n1  :       1.19     402.27      25.14       0.00      0.00  146598.35   10538.19  119118.23
00:25:30.561 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme7n1  :       1.19     388.58      24.29       0.00      0.00  149298.58   10538.19  111568.49
00:25:30.561 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme8n1  :       1.20     401.59      25.10       0.00      0.00  142769.85    8021.61  107374.18
00:25:30.561 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme9n1  :       1.19     377.94      23.62       0.00      0.00  150169.19    8545.89   99824.44
00:25:30.561 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.561 Verification LBA range: start 0x0 length 0x400
00:25:30.561 Nvme10n1 :       1.19     323.49      20.22       0.00      0.00  172784.16    9542.04  218103.81
[2024-12-15T15:12:59.131Z] ===================================================================================================================
[2024-12-15T15:12:59.131Z] Total    :               3806.03     237.88       0.00      0.00  155491.06    5740.95  218103.81
00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:30.821 16:12:59
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:30.821 rmmod nvme_rdma 00:25:30.821 rmmod nvme_fabrics 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 2910202 ']' 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 2910202 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2910202 ']' 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2910202 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2910202 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2910202' 00:25:30.821 killing process with pid 2910202 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2910202 00:25:30.821 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2910202 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:31.393 00:25:31.393 real 0m13.279s 00:25:31.393 user 0m28.474s 00:25:31.393 sys 0m6.565s 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:31.393 16:12:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:31.393 ************************************ 00:25:31.393 END TEST nvmf_shutdown_tc1 00:25:31.393 ************************************ 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:31.393 ************************************ 00:25:31.393 START TEST nvmf_shutdown_tc2 00:25:31.393 ************************************ 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.393 16:12:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:31.393 
16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:31.393 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:31.393 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.393 16:12:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:31.393 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:31.393 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # rdma_device_init 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:31.393 16:12:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:31.393 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:31.653 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:31.654 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:31.654 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:31.654 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:31.654 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:31.654 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:31.654 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:31.654 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:31.654 16:12:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:31.654 16:12:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:31.654 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:31.654 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:31.654 altname enp217s0f0np0 00:25:31.654 altname ens818f0np0 00:25:31.654 inet 192.168.100.8/24 scope global mlx_0_0 00:25:31.654 valid_lft forever preferred_lft forever 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:31.654 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:31.654 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:31.654 altname enp217s0f1np1 00:25:31.654 altname ens818f1np1 00:25:31.654 inet 192.168.100.9/24 scope global mlx_0_1 00:25:31.654 valid_lft forever preferred_lft forever 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:31.654 16:13:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:31.654 192.168.100.9' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:31.654 192.168.100.9' 00:25:31.654 16:13:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # head -n 1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:31.654 192.168.100.9' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # tail -n +2 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # head -n 1 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=2911455 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 2911455 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2911455 ']' 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
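The allocate_nic_ips plumbing traced above reduces to one small pipeline per interface: column 4 of `ip -o -4 addr show <if>` is `ADDR/PREFIX`, so awk plus cut yield the bare address, and common.sh@480-@482 then splits the resulting list into the first and second target IPs with head/tail. A condensed sketch of that logic; the hard-coded mlx_0_0/mlx_0_1 device list is taken from this trace and would differ on other hosts:

```bash
# Condensed get_ip_address (nvmf/common.sh@116-@117): extract the IPv4
# address of one RDMA-backed netdev, dropping the /prefix suffix.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Mirrors common.sh@480-@482: first address found becomes the target IP,
# the next one (if present) the secondary target IP.
RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "target IPs: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

On this node the pipeline resolves to 192.168.100.8 and 192.168.100.9, which is why the nvmf_tgt started at the end of the block above listens on 192.168.100.8.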
00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:31.654 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.654 [2024-12-15 16:13:00.181971] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:31.654 [2024-12-15 16:13:00.182022] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.913 [2024-12-15 16:13:00.252198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.914 [2024-12-15 16:13:00.291939] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.914 [2024-12-15 16:13:00.291977] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.914 [2024-12-15 16:13:00.291987] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.914 [2024-12-15 16:13:00.291995] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.914 [2024-12-15 16:13:00.292002] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.914 [2024-12-15 16:13:00.292103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.914 [2024-12-15 16:13:00.292190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.914 [2024-12-15 16:13:00.292300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.914 [2024-12-15 16:13:00.292302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.914 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.914 [2024-12-15 16:13:00.462959] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe01140/0xe05630) succeed. 00:25:31.914 [2024-12-15 16:13:00.473503] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe02780/0xe46cd0) succeed. 
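The two create_ib_device notices above acknowledge the transport that target/shutdown.sh@21 just created over the freshly started nvmf_tgt (pid 2911455). Outside the harness the same step is a single rpc.py call; the workspace path and default RPC socket below are assumptions, and the flag glosses reflect rpc.py's usual option names rather than anything shown in this trace:

```bash
# Direct equivalent of the traced rpc_cmd:
#   nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# --num-shared-buffers sizes the shared receive buffer pool; -u is the
# I/O unit size (8 KiB here). Path and socket are assumed defaults.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
```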
00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.172 16:13:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.172 Malloc1 00:25:32.172 [2024-12-15 16:13:00.694455] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:32.172 Malloc2 00:25:32.431 Malloc3 00:25:32.431 Malloc4 00:25:32.431 Malloc5 00:25:32.431 Malloc6 00:25:32.431 Malloc7 00:25:32.431 Malloc8 00:25:32.691 Malloc9 00:25:32.691 Malloc10 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2911546 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2911546 /var/tmp/bdevperf.sock 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2911546 ']' 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
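The Malloc1 through Malloc10 lines above come from the create_subsystems step traced at target/shutdown.sh@27-@36: the loop appends one block of RPC commands per subsystem to rpcs.txt, and a single rpc_cmd at @36 replays the whole file. A sketch of that batch pattern; the malloc size, block size, and serial numbers are assumptions (the trace does not show them), while the NQNs, address, and port match the listener notice above:

```bash
# Build rpcs.txt (one bdev + subsystem + namespace + listener per loop
# iteration), then replay it in one shot: rpc.py reads a command stream
# from stdin when no subcommand is given.
rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
rm -f "$rpcs"
for i in {1..10}; do
    cat >>"$rpcs" <<EOF
bdev_malloc_create 128 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py < "$rpcs"
```

The "Waiting for process ... /var/tmp/bdevperf.sock" message closing the block above is the same listen-poll used for the target earlier, now pointed at bdevperf's private RPC socket.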
00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.691 { 00:25:32.691 "params": { 00:25:32.691 "name": "Nvme$subsystem", 00:25:32.691 "trtype": "$TEST_TRANSPORT", 00:25:32.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.691 "adrfam": "ipv4", 00:25:32.691 "trsvcid": "$NVMF_PORT", 00:25:32.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.691 "hdgst": ${hdgst:-false}, 00:25:32.691 "ddgst": ${ddgst:-false} 00:25:32.691 }, 00:25:32.691 "method": "bdev_nvme_attach_controller" 00:25:32.691 } 00:25:32.691 EOF 00:25:32.691 )") 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.691 { 00:25:32.691 "params": { 00:25:32.691 "name": "Nvme$subsystem", 00:25:32.691 "trtype": "$TEST_TRANSPORT", 00:25:32.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.691 "adrfam": "ipv4", 00:25:32.691 "trsvcid": "$NVMF_PORT", 00:25:32.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.691 "hdgst": ${hdgst:-false}, 00:25:32.691 "ddgst": ${ddgst:-false} 00:25:32.691 }, 00:25:32.691 "method": "bdev_nvme_attach_controller" 00:25:32.691 } 00:25:32.691 EOF 00:25:32.691 )") 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.691 { 00:25:32.691 "params": { 00:25:32.691 "name": "Nvme$subsystem", 00:25:32.691 "trtype": "$TEST_TRANSPORT", 00:25:32.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.691 "adrfam": "ipv4", 00:25:32.691 "trsvcid": "$NVMF_PORT", 00:25:32.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.691 "hdgst": ${hdgst:-false}, 00:25:32.691 "ddgst": ${ddgst:-false} 00:25:32.691 }, 00:25:32.691 "method": "bdev_nvme_attach_controller" 00:25:32.691 } 00:25:32.691 EOF 00:25:32.691 )") 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.691 16:13:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.691 { 00:25:32.691 "params": { 00:25:32.691 "name": "Nvme$subsystem", 00:25:32.691 "trtype": "$TEST_TRANSPORT", 00:25:32.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.691 "adrfam": "ipv4", 00:25:32.691 "trsvcid": "$NVMF_PORT", 00:25:32.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.691 "hdgst": ${hdgst:-false}, 00:25:32.691 "ddgst": ${ddgst:-false} 00:25:32.691 }, 00:25:32.691 "method": "bdev_nvme_attach_controller" 00:25:32.691 } 00:25:32.691 EOF 00:25:32.691 )") 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.691 { 00:25:32.691 "params": { 00:25:32.691 "name": "Nvme$subsystem", 00:25:32.691 "trtype": "$TEST_TRANSPORT", 00:25:32.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.691 "adrfam": "ipv4", 00:25:32.691 "trsvcid": "$NVMF_PORT", 00:25:32.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.691 "hdgst": ${hdgst:-false}, 00:25:32.691 "ddgst": ${ddgst:-false} 00:25:32.691 }, 00:25:32.691 "method": "bdev_nvme_attach_controller" 00:25:32.691 } 00:25:32.691 EOF 00:25:32.691 )") 00:25:32.691 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.692 [2024-12-15 16:13:01.174331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
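The EAL parameters line below (file-prefix spdk_pid2911546) is bdevperf itself starting up. Reconstructing the launch from the flags recorded at target/shutdown.sh@103: the JSON from gen_nvmf_target_json arrives through process substitution, which is why --json points at /dev/fd/63. A sketch, assuming the working directory is the SPDK checkout in this workspace:

```bash
# Reconstruction of the traced bdevperf launch: queue depth 64, 64 KiB
# I/Os, verify workload, 10-second run, on a private RPC socket.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10
```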
00:25:32.692 [2024-12-15 16:13:01.174388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911546 ] 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.692 { 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme$subsystem", 00:25:32.692 "trtype": "$TEST_TRANSPORT", 00:25:32.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "$NVMF_PORT", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.692 "hdgst": ${hdgst:-false}, 00:25:32.692 "ddgst": ${ddgst:-false} 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 } 00:25:32.692 EOF 00:25:32.692 )") 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.692 { 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme$subsystem", 00:25:32.692 "trtype": "$TEST_TRANSPORT", 00:25:32.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "$NVMF_PORT", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.692 "hdgst": ${hdgst:-false}, 00:25:32.692 "ddgst": ${ddgst:-false} 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 } 00:25:32.692 EOF 00:25:32.692 )") 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.692 { 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme$subsystem", 00:25:32.692 "trtype": "$TEST_TRANSPORT", 00:25:32.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "$NVMF_PORT", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.692 "hdgst": ${hdgst:-false}, 00:25:32.692 "ddgst": ${ddgst:-false} 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 } 00:25:32.692 EOF 00:25:32.692 )") 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.692 { 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme$subsystem", 00:25:32.692 "trtype": "$TEST_TRANSPORT", 00:25:32.692 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "$NVMF_PORT", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.692 "hdgst": ${hdgst:-false}, 00:25:32.692 "ddgst": ${ddgst:-false} 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 } 00:25:32.692 EOF 00:25:32.692 )") 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:32.692 { 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme$subsystem", 00:25:32.692 "trtype": "$TEST_TRANSPORT", 00:25:32.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "$NVMF_PORT", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.692 "hdgst": ${hdgst:-false}, 00:25:32.692 "ddgst": ${ddgst:-false} 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 } 00:25:32.692 EOF 00:25:32.692 )") 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:25:32.692 16:13:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme1", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme2", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme3", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme4", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 
00:25:32.692 "params": { 00:25:32.692 "name": "Nvme5", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme6", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme7", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme8", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme9", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 },{ 00:25:32.692 "params": { 00:25:32.692 "name": "Nvme10", 00:25:32.692 "trtype": "rdma", 00:25:32.692 "traddr": "192.168.100.8", 00:25:32.692 "adrfam": "ipv4", 00:25:32.692 "trsvcid": "4420", 00:25:32.692 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:32.692 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:32.692 "hdgst": false, 00:25:32.692 "ddgst": false 00:25:32.692 }, 00:25:32.692 "method": "bdev_nvme_attach_controller" 00:25:32.692 }' 00:25:32.692 [2024-12-15 16:13:01.246889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.952 [2024-12-15 16:13:01.285483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.889 Running I/O for 10 seconds... 
00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.889 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:34.148 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.148 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=26 00:25:34.148 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 26 -ge 100 ']' 00:25:34.148 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:34.408 16:13:02 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=175 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 175 -ge 100 ']' 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2911546 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2911546 ']' 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2911546 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2911546 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2911546' 00:25:34.408 killing process with pid 2911546 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2911546 00:25:34.408 16:13:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2911546 00:25:34.667 Received shutdown signal, test time was about 0.826849 seconds 00:25:34.667 00:25:34.667 Latency(us) 00:25:34.667 [2024-12-15T15:13:03.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.667 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.667 Verification LBA range: start 0x0 length 0x400 00:25:34.667 Nvme1n1 : 0.81 369.55 23.10 0.00 0.00 169650.33 6448.74 223136.97 00:25:34.667 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.667 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme2n1 : 0.81 393.61 24.60 0.00 0.00 156350.22 6920.60 165255.58 00:25:34.668 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.668 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme3n1 : 0.81 393.04 24.56 0.00 0.00 153595.41 8074.04 158544.69 00:25:34.668 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.668 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme4n1 : 0.82 394.87 24.68 
0.00 0.00 149981.21 3591.37 150994.94 00:25:34.668 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.668 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme5n1 : 0.82 391.72 24.48 0.00 0.00 148414.79 9017.75 140089.75 00:25:34.668 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.668 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme6n1 : 0.82 391.15 24.45 0.00 0.00 145133.16 9437.18 132540.01 00:25:34.668 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.668 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme7n1 : 0.82 390.48 24.40 0.00 0.00 142720.53 9961.47 121634.82 00:25:34.668 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.668 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme8n1 : 0.82 389.83 24.36 0.00 0.00 139867.91 10485.76 112407.35 00:25:34.668 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.668 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme9n1 : 0.82 389.06 24.32 0.00 0.00 137661.32 11324.62 98146.71 00:25:34.668 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.668 Verification LBA range: start 0x0 length 0x400 00:25:34.668 Nvme10n1 : 0.83 309.85 19.37 0.00 0.00 168722.02 2909.80 231525.58 00:25:34.668 [2024-12-15T15:13:03.238Z] =================================================================================================================== 00:25:34.668 [2024-12-15T15:13:03.238Z] Total : 3813.14 238.32 0.00 0.00 150731.16 2909.80 231525.58 00:25:34.927 16:13:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2911455 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-rdma 00:25:35.865 rmmod nvme_rdma 00:25:35.865 rmmod nvme_fabrics 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 2911455 ']' 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 2911455 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2911455 ']' 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2911455 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.865 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2911455 00:25:35.866 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:35.866 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:35.866 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2911455' 00:25:35.866 killing process with pid 2911455 00:25:35.866 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2911455 00:25:35.866 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2911455 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:36.435 00:25:36.435 real 0m4.954s 00:25:36.435 user 0m19.821s 00:25:36.435 sys 0m1.139s 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.435 ************************************ 00:25:36.435 END TEST nvmf_shutdown_tc2 00:25:36.435 ************************************ 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@171 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:36.435 ************************************ 00:25:36.435 START TEST nvmf_shutdown_tc3 00:25:36.435 ************************************ 00:25:36.435 
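Before moving into tc3, note how tc2 decided it was safe to kill bdevperf: the waitforio loop at target/shutdown.sh@58-70 above polls bdevperf's RPC socket for Nvme1n1's read count, sleeping 0.25 s between attempts, until 100 reads are seen or 10 attempts expire (the trace shows 26 ops on the first poll and 175 on the second). A hedged reconstruction of that loop; the scripts/rpc.py path is an assumption, since the test itself goes through its rpc_cmd wrapper:

#!/usr/bin/env bash
# Reconstruction of the waitforio helper exercised in tc2 above.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i
    for ((i = 10; i != 0; i--)); do
        # Ask bdevperf for per-bdev I/O stats over its own RPC socket.
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1 && echo "I/O is flowing; safe to shut down"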
16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:36.435 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:36.436 16:13:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:36.436 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:36.436 
16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:36.436 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:36.436 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:36.436 
16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:36.436 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # rdma_device_init 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:36.436 16:13:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:36.713 16:13:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:36.713 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:36.714 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:36.714 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:36.714 altname enp217s0f0np0 00:25:36.714 altname ens818f0np0 00:25:36.714 inet 192.168.100.8/24 scope global mlx_0_0 00:25:36.714 valid_lft forever preferred_lft forever 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:36.714 16:13:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:36.714 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:36.714 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:36.714 altname enp217s0f1np1 00:25:36.714 altname ens818f1np1 00:25:36.714 inet 192.168.100.9/24 scope global mlx_0_1 00:25:36.714 valid_lft forever preferred_lft forever 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:36.714 16:13:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:36.714 192.168.100.9' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:36.714 192.168.100.9' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # head -n 1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:36.714 192.168.100.9' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # tail -n +2 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # head -n 1 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:36.714 16:13:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=2912418 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 2912418 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2912418 ']' 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:36.714 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:36.714 [2024-12-15 16:13:05.244246] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:36.714 [2024-12-15 16:13:05.244293] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.977 [2024-12-15 16:13:05.312991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.977 [2024-12-15 16:13:05.352555] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.977 [2024-12-15 16:13:05.352595] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:36.977 [2024-12-15 16:13:05.352605] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.977 [2024-12-15 16:13:05.352613] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.977 [2024-12-15 16:13:05.352620] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.977 [2024-12-15 16:13:05.352722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.977 [2024-12-15 16:13:05.352815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.977 [2024-12-15 16:13:05.352923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.977 [2024-12-15 16:13:05.352925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.977 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:36.977 [2024-12-15 16:13:05.522849] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2131140/0x2135630) succeed. 00:25:36.977 [2024-12-15 16:13:05.533481] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2132780/0x2176cd0) succeed. 
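With both mlx5 IB devices created, the target side of tc3 is up. Roughly the standalone equivalent of the nvmfappstart and nvmf_create_transport steps traced above, as a sketch: the binary and script paths are abbreviated as in the trace, and framework_wait_init stands in for the log's waitforlisten helper.

# Launch the target with the flags shown in the trace, then create the
# RDMA transport with 1024 shared buffers and an 8 KiB in-capsule size.
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
scripts/rpc.py framework_wait_init
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192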
00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.237 16:13:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:37.237 Malloc1 00:25:37.237 [2024-12-15 16:13:05.754756] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:37.237 Malloc2 00:25:37.496 Malloc3 00:25:37.496 Malloc4 00:25:37.496 Malloc5 00:25:37.496 Malloc6 00:25:37.496 Malloc7 00:25:37.496 Malloc8 00:25:37.756 Malloc9 00:25:37.756 Malloc10 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2912624 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2912624 /var/tmp/bdevperf.sock 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2912624 ']' 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:37.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
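The Malloc1..Malloc10 lines above are the visible output of create_subsystems: each "cat" at target/shutdown.sh@29 emits one rpc_cmd block that builds a malloc bdev, an NVMe-oF subsystem, and the RDMA listener on 192.168.100.8:4420 announced by the nvmf_rdma_listen notice. A sketch of one such block per index; the 128 MiB / 512 B sizing and the SPDK$i serial are assumptions, while the NQNs, address, and port match the trace:

# One malloc bdev + one subsystem + one RDMA listener per index.
for i in {1..10}; do
    scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420
done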
00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:37.756 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 [2024-12-15 16:13:06.248599] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:37.757 [2024-12-15 16:13:06.248655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912624 ] 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:37.757 { 00:25:37.757 "params": { 00:25:37.757 "name": 
"Nvme$subsystem", 00:25:37.757 "trtype": "$TEST_TRANSPORT", 00:25:37.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "$NVMF_PORT", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.757 "hdgst": ${hdgst:-false}, 00:25:37.757 "ddgst": ${ddgst:-false} 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 } 00:25:37.757 EOF 00:25:37.757 )") 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:25:37.757 16:13:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme1", 00:25:37.757 "trtype": "rdma", 00:25:37.757 "traddr": "192.168.100.8", 00:25:37.757 "adrfam": "ipv4", 00:25:37.757 "trsvcid": "4420", 00:25:37.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:37.757 "hdgst": false, 00:25:37.757 "ddgst": false 00:25:37.757 }, 00:25:37.757 "method": "bdev_nvme_attach_controller" 00:25:37.757 },{ 00:25:37.757 "params": { 00:25:37.757 "name": "Nvme2", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 },{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme3", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 },{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme4", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 },{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme5", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 },{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme6", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": 
"bdev_nvme_attach_controller" 00:25:37.758 },{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme7", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 },{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme8", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 },{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme9", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 },{ 00:25:37.758 "params": { 00:25:37.758 "name": "Nvme10", 00:25:37.758 "trtype": "rdma", 00:25:37.758 "traddr": "192.168.100.8", 00:25:37.758 "adrfam": "ipv4", 00:25:37.758 "trsvcid": "4420", 00:25:37.758 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:37.758 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:37.758 "hdgst": false, 00:25:37.758 "ddgst": false 00:25:37.758 }, 00:25:37.758 "method": "bdev_nvme_attach_controller" 00:25:37.758 }' 00:25:37.758 [2024-12-15 16:13:06.324078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.017 [2024-12-15 16:13:06.362153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.954 Running I/O for 10 seconds... 
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.954 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:25:39.213 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.213 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=4
00:25:39.213 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 4 -ge 100 ']'
00:25:39.213 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=148
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 148 -ge 100 ']'
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2912418
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2912418 ']'
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2912418
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2912418
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2912418'
killing process with pid 2912418
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2912418
00:25:39.473 16:13:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2912418
00:25:39.992 2654.00 IOPS, 165.88 MiB/s [2024-12-15T15:13:08.562Z] 16:13:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # nvmfpid=
00:25:39.992 16:13:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # sleep 1
00:25:40.563 [2024-12-15 16:13:09.020777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:40.563 [2024-12-15 16:13:09.020816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:40.563 [2024-12-15 16:13:09.020830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
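The waitforio gate that precedes the kill is fully visible in the xtrace above (target/shutdown.sh @51 through @70): it polls bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads, here reaching 148 on the second probe. Reassembled from that trace as a sketch, with rpc_cmd being the suite's RPC wrapper and the threshold and retry budget taken from the values the log shows:

# Return 0 once the named bdev has served >= 100 reads, polling up to
# 10 times at 0.25 s intervals over the given RPC socket.
waitforio() {
    local rpc_sock=$1 bdev=$2
    [ -z "$rpc_sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1
    local i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "${read_io_count:-0}" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

Only once this returns 0 does shutdown.sh@136 killprocess the target (pid 2912418) while bdevperf keeps issuing I/O: tc3 exercises the initiator surviving the target dying mid-run, which is exactly what the SQ-deletion stream continuing below records.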
00:25:40.563 [2024-12-15 16:13:09.020839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.563 [2024-12-15 16:13:09.020848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.020856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.564 [2024-12-15 16:13:09.020865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.020873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.564 [2024-12-15 16:13:09.023080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.564 [2024-12-15 16:13:09.023129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:40.564 [2024-12-15 16:13:09.023226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.023262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.564 [2024-12-15 16:13:09.023297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.023326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.564 [2024-12-15 16:13:09.023359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.023389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.564 [2024-12-15 16:13:09.023421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.023451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.564 [2024-12-15 16:13:09.025604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.564 [2024-12-15 16:13:09.025647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:25:40.564 [2024-12-15 16:13:09.025716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.025752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.025785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.025816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.025848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.025879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.025910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.025941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.028005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.564 [2024-12-15 16:13:09.028048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:40.564 [2024-12-15 16:13:09.028092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.028106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.028120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.028133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.028146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.028159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.028175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.028187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.030560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.564 [2024-12-15 16:13:09.030601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:40.564 [2024-12-15 16:13:09.030654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.030701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.030736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.030765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.030798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.030827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.030861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.030890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.033469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.564 [2024-12-15 16:13:09.033502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.564 [2024-12-15 16:13:09.033532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.033548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.033562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.033575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.033588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.033601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.033614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.033626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.039525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.564 [2024-12-15 16:13:09.039555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:25:40.564 [2024-12-15 16:13:09.039581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.039595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.039613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.039626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.039639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.039652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.039665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.039677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.042066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.564 [2024-12-15 16:13:09.042112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:40.564 [2024-12-15 16:13:09.042168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.042202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.042237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.042266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.042299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.042334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.042347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.564 [2024-12-15 16:13:09.042359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:0 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.564 [2024-12-15 16:13:09.044072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.564 [2024-12-15 16:13:09.044113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:25:40.564 [2024-12-15 16:13:09.046356] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a54a80 was disconnected and freed. reset controller. 00:25:40.564 [2024-12-15 16:13:09.046400] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.564 [2024-12-15 16:13:09.048854] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a547c0 was disconnected and freed. reset controller. 00:25:40.564 [2024-12-15 16:13:09.048899] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.564 [2024-12-15 16:13:09.051434] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a54500 was disconnected and freed. reset controller. 00:25:40.564 [2024-12-15 16:13:09.051476] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.564 [2024-12-15 16:13:09.055022] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a54240 was disconnected and freed. reset controller. 00:25:40.564 [2024-12-15 16:13:09.055093] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.565 [2024-12-15 16:13:09.057823] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a05d00 was disconnected and freed. reset controller. 00:25:40.565 [2024-12-15 16:13:09.057881] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.565 [2024-12-15 16:13:09.060050] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a05a40 was disconnected and freed. reset controller. 00:25:40.565 [2024-12-15 16:13:09.060070] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.565 [2024-12-15 16:13:09.062250] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a05780 was disconnected and freed. reset controller. 00:25:40.565 [2024-12-15 16:13:09.064331] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001a40f140 was disconnected and freed. reset controller. 00:25:40.565 [2024-12-15 16:13:09.064350] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
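Everything from here down is the expected fallout of that kill rather than a new failure: each ABORTED - SQ DELETION (00/08) completion decodes as status code type 0x0 (generic) with status code 0x8, Command Aborted due to SQ Deletion, after which each controller is marked failed and its qpairs are disconnected and freed. When triaging a log like this it can help to tally which subsystems reached the failed state; a throwaway one-liner, with build.log standing in for a saved copy of this console output:

grep -o 'nqn.2016-06.io.spdk:cnode[0-9]*' build.log | sort | uniq -c | sort -rn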
00:25:40.565 [2024-12-15 16:13:09.064423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010050000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010071000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010092000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100b3000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100d4000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100f5000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010116000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010137000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010158000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064698] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010179000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001019a000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101bb000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101dc000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101fd000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c07000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011be6000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064952] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011bc5000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.064982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ba4000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.064997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b83000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b62000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b41000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d8f000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d6e000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cc9000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ca8000 len:0x10000 key:0x183400 00:25:40.565 [2024-12-15 16:13:09.065331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.565 [2024-12-15 16:13:09.065347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c87000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c66000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e53e000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000e51d000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4fc000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4db000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4ba000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e499000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ee1000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f02000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f23000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f44000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f65000 len:0x10000 key:0x183400 
00:25:40.566 [2024-12-15 16:13:09.065734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f86000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010fa7000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f9f000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f7e000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f5d000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f1b000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012efa000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.065980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ed9000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.065992] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012eb8000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e97000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e76000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012dd1000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.066227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012db0000 len:0x10000 key:0x183400 00:25:40.566 [2024-12-15 16:13:09.066240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.068248] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001a40ee80 was disconnected and freed. reset controller. 00:25:40.566 [2024-12-15 16:13:09.068274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf680 len:0x10000 key:0x184900 00:25:40.566 [2024-12-15 16:13:09.068290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.068318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf600 len:0x10000 key:0x184900 00:25:40.566 [2024-12-15 16:13:09.068332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.068351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f580 len:0x10000 key:0x184900 00:25:40.566 [2024-12-15 16:13:09.068363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.566 [2024-12-15 16:13:09.068381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8f500 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7f480 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6f400 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5f380 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4f300 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3f280 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2f200 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1f180 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0f100 len:0x10000 key:0x184900 00:25:40.567 [2024-12-15 16:13:09.068646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031f0000 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff80 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cff00 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfe80 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afe00 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20100319fd80 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fd00 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fc80 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316fc00 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315fb80 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.068978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314fb00 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.068991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313fa80 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312fa00 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f980 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f900 len:0x10000 
key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff880 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef800 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df780 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf700 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf680 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af600 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f580 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308f500 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307f480 len:0x10000 key:0x183d00 00:25:40.567 
[2024-12-15 16:13:09.069394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306f400 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305f380 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304f300 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.567 [2024-12-15 16:13:09.069505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303f280 len:0x10000 key:0x183d00 00:25:40.567 [2024-12-15 16:13:09.069520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302f200 len:0x10000 key:0x183d00 00:25:40.568 [2024-12-15 16:13:09.069551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301f180 len:0x10000 key:0x183d00 00:25:40.568 [2024-12-15 16:13:09.069582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300f100 len:0x10000 key:0x183d00 00:25:40.568 [2024-12-15 16:13:09.069613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033f0000 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff80 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069674] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cff00 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfe80 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afe00 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fd80 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fd00 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fc80 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336fc00 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335fb80 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334fb00 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.069975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333fa80 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.069988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332fa00 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.070018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f980 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.070051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f900 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.070081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff880 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.070112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef800 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.070141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df780 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.070172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf700 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.070201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf680 len:0x10000 key:0x184300 00:25:40.568 [2024-12-15 16:13:09.070232] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.070249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf700 len:0x10000 key:0x184900 00:25:40.568 [2024-12-15 16:13:09.070262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8269b000 sqhd:7250 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.073239] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001a40ebc0 was disconnected and freed. reset controller. 00:25:40.568 [2024-12-15 16:13:09.073260] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.568 [2024-12-15 16:13:09.073338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.568 [2024-12-15 16:13:09.073355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:226cf70 sqhd:a0c0 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.073370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.568 [2024-12-15 16:13:09.073383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:226cf70 sqhd:a0c0 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.073398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.568 [2024-12-15 16:13:09.073413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:226cf70 sqhd:a0c0 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.073432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.568 [2024-12-15 16:13:09.073445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:226cf70 sqhd:a0c0 p:0 m:0 dnr:0 00:25:40.568 [2024-12-15 16:13:09.075296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.568 [2024-12-15 16:13:09.075313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:40.568 [2024-12-15 16:13:09.075326] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.568 [2024-12-15 16:13:09.075345] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
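Every completion in the flood of notices above carries the status pair "(00/08)": status code type 0x00 (the generic command status set) with status code 0x08, ABORTED - SQ DELETION, meaning the queued READ/WRITE commands were aborted because their submission queue was deleted while the controller was being torn down. A minimal sketch, assuming SPDK's public completion definitions, of how a completion callback could recognize this status (the helper name is ours, not an SPDK API):

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* Sketch: classify the "(00/08)" completions logged above. SCT 0x0 is the
     * generic command status set; SC 0x8 is ABORTED - SQ DELETION. */
    static bool
    aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }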
00:25:40.568 [2024-12-15 16:13:09.075365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.568 [2024-12-15 16:13:09.075379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:226cf70 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.568 [2024-12-15 16:13:09.075393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.569 [2024-12-15 16:13:09.075405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:226cf70 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.569 [2024-12-15 16:13:09.075419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.569 [2024-12-15 16:13:09.075432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:226cf70 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.569 [2024-12-15 16:13:09.075445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.569 [2024-12-15 16:13:09.075457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18514 cdw0:226cf70 sqhd:5f00 p:0 m:1 dnr:0 00:25:40.569 [2024-12-15 16:13:09.094034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.569 [2024-12-15 16:13:09.094054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:40.569 [2024-12-15 16:13:09.094065] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.569 [2024-12-15 16:13:09.094080] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.569 [2024-12-15 16:13:09.094095] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.569 [2024-12-15 16:13:09.094110] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.569 [2024-12-15 16:13:09.094124] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.569 [2024-12-15 16:13:09.094139] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.569 [2024-12-15 16:13:09.094154] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.569 [2024-12-15 16:13:09.094167] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.569 [2024-12-15 16:13:09.095682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.569 [2024-12-15 16:13:09.095708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:40.569 [2024-12-15 16:13:09.095720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:40.569 [2024-12-15 16:13:09.095764] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
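The sequence logged here - "CQ transport error -6 (No such device or address)", a subsystem left "in failed state", failover declined because one is already in progress, then a string of "resetting controller" notices - is bdev_nvme recovering from dead RDMA qpairs. A hedged sketch of the equivalent host-side pattern using SPDK's public NVMe driver API (an illustration of the recovery loop, not the bdev_nvme implementation itself):

    #include "spdk/nvme.h"

    /* Sketch: poll one I/O qpair; a negative return (e.g. -ENXIO) signals a
     * transport-level failure, after which the controller is reset - the step
     * that emits the "resetting controller" notices seen in this log. */
    static void
    poll_and_reset(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

            if (rc < 0 || spdk_nvme_ctrlr_is_failed(ctrlr)) {
                    if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                            /* Reset failed; a real application would retry or detach. */
                    }
            }
    }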
00:25:40.569 [2024-12-15 16:13:09.095782] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:40.569 [2024-12-15 16:13:09.095794] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:40.569 [2024-12-15 16:13:09.095807] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:40.569 [2024-12-15 16:13:09.095823] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:40.569 [2024-12-15 16:13:09.095836] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:40.569 [2024-12-15 16:13:09.095847] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:40.569 [2024-12-15 16:13:09.096126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:40.569 [2024-12-15 16:13:09.096141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:40.569 [2024-12-15 16:13:09.096152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:40.569 [2024-12-15 16:13:09.096163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:40.569 [2024-12-15 16:13:09.096173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:40.569 [2024-12-15 16:13:09.096183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:40.569 [2024-12-15 16:13:09.096193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:40.828 task offset: 34944 on job bdev=Nvme1n1 fails
00:25:40.828
00:25:40.828 Latency(us)
00:25:40.828 [2024-12-15T15:13:09.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:40.828 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme1n1 ended in about 1.88 seconds with error
00:25:40.828 Verification LBA range: start 0x0 length 0x400
00:25:40.828 Nvme1n1 : 1.88 136.65 8.54 34.03 0.00 370969.08 5924.45 1053609.16
00:25:40.828 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme2n1 ended in about 1.88 seconds with error
00:25:40.828 Verification LBA range: start 0x0 length 0x400
00:25:40.828 Nvme2n1 : 1.88 136.06 8.50 34.01 0.00 368890.35 46976.20 1046898.28
00:25:40.828 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme3n1 ended in about 1.88 seconds with error
00:25:40.828 Verification LBA range: start 0x0 length 0x400
00:25:40.828 Nvme3n1 : 1.88 153.00 9.56 34.00 0.00 332541.02 6632.24 1046898.28
00:25:40.828 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme4n1 ended in about 1.88 seconds with error
00:25:40.828 Verification LBA range: start 0x0 length 0x400
00:25:40.828 Nvme4n1 : 1.88 152.93 9.56 33.98 0.00 329795.03 14575.21 1040187.39
00:25:40.828 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme5n1 ended in about 1.88 seconds with error
00:25:40.828 Verification LBA range: start 0x0 length 0x400
00:25:40.828 Nvme5n1 : 1.88 143.30 8.96 33.97 0.00 344678.86 17720.93 1040187.39
00:25:40.828 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme6n1 ended in about 1.88 seconds with error
00:25:40.828 Verification LBA range: start 0x0 length 0x400
00:25:40.828 Nvme6n1 : 1.88 152.26 9.52 33.95 0.00 325149.97 22544.38 1033476.51
00:25:40.828 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme7n1 ended in about 1.89 seconds with error
00:25:40.828 Verification LBA range: start 0x0 length 0x400
00:25:40.828 Nvme7n1 : 1.89 148.48 9.28 33.94 0.00 328925.49 32296.14 1033476.51
00:25:40.828 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme8n1 ended in about 1.89 seconds with error
00:25:40.828 Verification LBA range: start 0x0 length 0x400
00:25:40.828 Nvme8n1 : 1.89 148.41 9.28 33.92 0.00 326105.31 38587.60 1033476.51
00:25:40.828 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.828 Job: Nvme9n1 ended in about 1.87 seconds with error
00:25:40.829 Verification LBA range: start 0x0 length 0x400
00:25:40.829 Nvme9n1 : 1.87 137.09 8.57 34.27 0.00 346979.70 31037.85 1067030.94
00:25:40.829 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.829 Job: Nvme10n1 ended in about 1.84 seconds with error
00:25:40.829 Verification LBA range: start 0x0 length 0x400
00:25:40.829 Nvme10n1 : 1.84 104.20 6.51 34.73 0.00 423157.76 62495.13 1067030.94
00:25:40.829 [2024-12-15T15:13:09.399Z] ===================================================================================================================
00:25:40.829 [2024-12-15T15:13:09.399Z] Total : 1412.37 88.27 340.81 0.00 347362.22 5924.45 1067030.94
00:25:40.829 [2024-12-15 16:13:09.142612] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:40.829 [2024-12-15 16:13:09.143892] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:40.829 [2024-12-15 16:13:09.143910] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:40.829 [2024-12-15 16:13:09.143918] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000
00:25:40.829 [2024-12-15 16:13:09.144004] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:40.829 [2024-12-15 16:13:09.144015] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:40.829 [2024-12-15 16:13:09.144022] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019ae5280
00:25:40.829 [2024-12-15 16:13:09.144091] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:40.829 [2024-12-15 16:13:09.144101] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:40.829 [2024-12-15 16:13:09.144108] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aba2c0
00:25:40.829 [2024-12-15 16:13:09.144220] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:40.829 [2024-12-15 16:13:09.144232]
nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:40.829 [2024-12-15 16:13:09.144239] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a9a040 00:25:40.829 [2024-12-15 16:13:09.144324] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:40.829 [2024-12-15 16:13:09.144335] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:40.829 [2024-12-15 16:13:09.144342] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019abf4c0 00:25:40.829 [2024-12-15 16:13:09.144430] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:40.829 [2024-12-15 16:13:09.144441] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:40.829 [2024-12-15 16:13:09.144448] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a89000 00:25:40.829 [2024-12-15 16:13:09.144526] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:40.829 [2024-12-15 16:13:09.144537] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:40.829 [2024-12-15 16:13:09.144545] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a9a640 00:25:40.829 [2024-12-15 16:13:09.144642] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:40.829 [2024-12-15 16:13:09.144653] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:40.829 [2024-12-15 16:13:09.144660] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019abf180 00:25:40.829 [2024-12-15 16:13:09.144742] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:40.829 [2024-12-15 16:13:09.144753] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:40.829 [2024-12-15 16:13:09.144761] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019ad20c0 00:25:40.829 [2024-12-15 16:13:09.144848] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:40.829 [2024-12-15 16:13:09.144859] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:40.829 [2024-12-15 16:13:09.144867] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019ab9ac0 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # kill -9 2912624 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@145 -- # stoptarget 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:41.088 rmmod nvme_rdma 00:25:41.088 rmmod nvme_fabrics 00:25:41.088 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 125: 2912624 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:25:41.088 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:41.089 00:25:41.089 real 0m4.613s 00:25:41.089 user 0m15.146s 00:25:41.089 sys 0m1.247s 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:41.089 ************************************ 00:25:41.089 END TEST nvmf_shutdown_tc3 00:25:41.089 ************************************ 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@174 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:41.089 ************************************ 00:25:41.089 START TEST nvmf_shutdown_tc4 
00:25:41.089 ************************************ 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # starttarget 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.089 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- 
# x722=() 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.349 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:41.350 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ 
mlx5_core == unbound ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:41.350 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:41.350 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:41.350 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # rdma_device_init 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:41.350 16:13:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:41.350 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:41.350 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:41.350 altname enp217s0f0np0 00:25:41.350 altname ens818f0np0 00:25:41.350 inet 192.168.100.8/24 scope global mlx_0_0 00:25:41.350 valid_lft forever preferred_lft forever 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:41.350 16:13:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.350 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:41.351 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:41.351 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:41.351 altname enp217s0f1np1 00:25:41.351 altname ens818f1np1 00:25:41.351 inet 192.168.100.9/24 scope global mlx_0_1 00:25:41.351 valid_lft forever preferred_lft forever 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.351 16:13:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:41.351 192.168.100.9' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # head -n 1 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:41.351 192.168.100.9' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:41.351 192.168.100.9' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # tail -n +2 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # head -n 1 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:41.351 16:13:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=2913376 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 2913376 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 2913376 ']' 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:41.351 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.610 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:41.610 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:41.610 16:13:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:41.610 [2024-12-15 16:13:09.966518] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:41.610 [2024-12-15 16:13:09.966568] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.610 [2024-12-15 16:13:10.048093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.610 [2024-12-15 16:13:10.089053] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.610 [2024-12-15 16:13:10.089091] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
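[Editor's sketch] The get_rdma_if_list/get_ip_address plumbing traced above is what turns the two mlx_0_* netdevs into NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP. A minimal sketch of the helper as the trace shows it at nvmf/common.sh@116-117 (assumes iproute2; the real helper adds error handling around an empty result):
get_ip_address() {
    local interface=$1
    # field $4 of `ip -o -4 addr show` is the CIDR address, e.g. 192.168.100.8/24;
    # strip the prefix length to get the bare IP
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9 in this run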
00:25:41.610 [2024-12-15 16:13:10.089101] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.610 [2024-12-15 16:13:10.089110] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.610 [2024-12-15 16:13:10.089116] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.610 [2024-12-15 16:13:10.089220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.610 [2024-12-15 16:13:10.089304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.610 [2024-12-15 16:13:10.089436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.610 [2024-12-15 16:13:10.089438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:41.869 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:41.869 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:25:41.869 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:41.869 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:41.869 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:41.870 [2024-12-15 16:13:10.265736] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x217a140/0x217e630) succeed. 00:25:41.870 [2024-12-15 16:13:10.276354] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x217b780/0x21bfcd0) succeed. 
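[Editor's sketch] For anyone reproducing this step by hand: nvmfappstart plus the rpc_cmd above amount to launching the target binary and then creating the RDMA transport over its RPC socket. A minimal standalone sketch with the flags copied verbatim from this trace (paths relative to an SPDK checkout; the polling loop is a crude stand-in for the suite's waitforlisten helper):
# start the target with the same tracepoint mask and core mask as this run
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# wait until the RPC socket answers before issuing RPCs
until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# create the RDMA transport exactly as target/shutdown.sh@21 does above
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192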
00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:41.870 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.129 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:42.129 Malloc1 00:25:42.129 [2024-12-15 16:13:10.503044] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:42.129 Malloc2 00:25:42.129 Malloc3 00:25:42.129 Malloc4 00:25:42.129 Malloc5 00:25:42.388 Malloc6 00:25:42.388 Malloc7 00:25:42.388 Malloc8 00:25:42.388 Malloc9 00:25:42.388 Malloc10 00:25:42.388 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.388 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:42.388 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:42.388 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:42.388 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@154 -- # perfpid=2913586 00:25:42.388 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # sleep 5 00:25:42.388 16:13:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@153 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:25:42.647 [2024-12-15 16:13:11.026375] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
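[Editor's sketch] The spdk_nvme_perf invocation at target/shutdown.sh@153 above is the workload tc4 kills the target under: queue-depth-128 random writes for 20 seconds against the first RDMA listener. Run standalone it would look like this (all flags verbatim from the trace; only the working directory is assumed):
build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4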
00:25:47.924 16:13:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@157 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:47.924 16:13:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@160 -- # killprocess 2913376 00:25:47.924 16:13:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2913376 ']' 00:25:47.924 16:13:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2913376 00:25:47.924 16:13:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:25:47.924 16:13:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:47.924 16:13:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2913376 00:25:47.924 16:13:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:47.924 16:13:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:47.924 16:13:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2913376' 00:25:47.924 killing process with pid 2913376 00:25:47.924 16:13:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 2913376 00:25:47.924 16:13:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 2913376 00:25:47.924 NVMe io qpair process completion error 00:25:47.924 NVMe io qpair process completion error 00:25:47.924 NVMe io qpair process completion error 00:25:47.924 NVMe io qpair process completion error 00:25:47.924 NVMe io qpair process completion error 00:25:48.183 16:13:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@161 -- # nvmfpid= 00:25:48.183 16:13:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@164 -- # sleep 1 00:25:48.752 [2024-12-15 16:13:17.092673] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Submitting Keep Alive failed 00:25:48.752 [2024-12-15 16:13:17.092842] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:48.752 [2024-12-15 16:13:17.092887] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Submitting Keep Alive failed 00:25:48.752 NVMe io qpair process completion error 00:25:48.752 [2024-12-15 16:13:17.095761] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Submitting Keep Alive failed 00:25:48.752 NVMe io qpair process completion error 00:25:48.752 NVMe io qpair process completion error 00:25:48.752 Write completed with error (sct=0, sc=8) 00:25:48.752 Write completed with error (sct=0, sc=8) 00:25:48.752 Write completed with error (sct=0, sc=8) 00:25:48.752 Write completed with error (sct=0, sc=8) 00:25:48.752 Write completed with error (sct=0, sc=8) 00:25:48.752 Write completed with error (sct=0, sc=8) 00:25:48.752 Write completed with error (sct=0, sc=8) 00:25:48.752 Write completed with error (sct=0, sc=8) 00:25:48.752 Write completed with error (sct=0, 
sc=8) 00:25:48.752 Write completed with error (sct=0, sc=8) [... several hundred identical 'Write completed with error (sct=0, sc=8)' completions at 00:25:48.752-00:25:48.753 elided; the interleaved unique records are kept below ...] 00:25:48.752 NVMe io qpair process completion error 00:25:48.752 NVMe io qpair process completion error 00:25:48.753 NVMe io qpair process completion error 00:25:48.753 Write completed with error (sct=0, sc=8) [... remaining identical write-error completions at 00:25:48.753 elided ...] 00:25:48.753 Write completed with error (sct=0, sc=8)
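[Editor's note] The flood of identical completions above is the expected tc4 outcome: the target was just killed mid-workload, so every in-flight write comes back aborted. Reading the status pair against the NVMe base specification (our interpretation; the log itself prints only the raw numbers): sct=0 is the Generic Command Status type and sc=0x8 is Command Aborted due to SQ Deletion, which fits a target whose submission queues have vanished. An illustrative decoder covering only the pair seen in this log (hypothetical helper, not part of the test suite):
decode_nvme_status() {
    case "$1/$2" in
        0/0) echo 'generic / successful completion' ;;
        0/8) echo 'generic / command aborted due to SQ deletion' ;;
        *)   echo "sct=$1 sc=$2 (not mapped in this sketch)" ;;
    esac
}
decode_nvme_status 0 8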
00:25:48.753 Write completed with error (sct=0, sc=8) [... several hundred identical write-error completions at 00:25:48.753-00:25:48.754 elided ...] 00:25:48.754 NVMe io qpair process completion error 00:25:49.323 16:13:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # wait 2913586 00:25:49.584 Write completed with error (sct=0, sc=8) [... identical write-error completions at 00:25:49.584 elided ...] 00:25:49.584 Write completed with error (sct=0, sc=8)
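[Editor's sketch] The `wait 2913586` record above is target/shutdown.sh@165 reaping spdk_nvme_perf ($perfpid) now that the target is gone. It pairs with the killprocess idiom traced at 16:13:15-16:13:16 earlier; a simplified sketch of that helper (the real autotest_common.sh version also checks the process name and refuses to kill sudo):
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0      # nothing to do if it already exited
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                     # reap it; callers tolerate a nonzero status
}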
[... identical write-error completions at 00:25:49.584 elided ...] 00:25:49.584 [2024-12-15 16:13:18.101365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.584 [2024-12-15 16:13:18.101439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:49.584 [2024-12-15 16:13:18.103527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.584 [2024-12-15 16:13:18.103571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. [... identical write-error completions elided ...] 00:25:49.585 [2024-12-15 16:13:18.105735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.585 [2024-12-15 16:13:18.105782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. [... identical write-error completions elided ...] 00:25:49.585 [2024-12-15 16:13:18.108909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.585 [2024-12-15 16:13:18.108950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. [... identical write-error completions elided ...] 00:25:49.585 [2024-12-15 16:13:18.116803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 [... two final write-error completions elided ...] 00:25:49.585 [2024-12-15 16:13:18.116874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:25:49.585 Write completed with error (sct=0, sc=8)
00:25:49.585 [2024-12-15 16:13:18.128716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:49.585 Write completed with error (sct=0, sc=8)
00:25:49.585 [2024-12-15 16:13:18.128789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:25:49.585 Write completed with error (sct=0, sc=8)
00:25:49.586 Write completed with error (sct=0, sc=8)
00:25:49.586 [2024-12-15 16:13:18.130845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:49.586 [2024-12-15 16:13:18.130891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:25:49.586 Write completed with error (sct=0, sc=8)
00:25:49.586 [2024-12-15 16:13:18.142568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:49.586 Write completed with error (sct=0, sc=8)
00:25:49.586 [2024-12-15 16:13:18.142635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:25:49.586 Write completed with error (sct=0, sc=8)
00:25:49.586 Write completed with error (sct=0, sc=8)
00:25:49.847 Write completed with error (sct=0, sc=8)
00:25:49.847 [2024-12-15 16:13:18.153980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:49.847 [2024-12-15 16:13:18.154046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:25:49.847 Write completed with error (sct=0, sc=8)
00:25:49.848 Write completed with error (sct=0, sc=8)
00:25:49.848 Write completed with error (sct=0, sc=8)
00:25:49.848 [2024-12-15 16:13:18.166139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:49.848 Write completed with error (sct=0, sc=8)
00:25:49.848 [2024-12-15 16:13:18.166243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
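Every write in flight fails with (sct=0, sc=8) once the CQ reports transport error -6 (ENXIO): the RDMA connection to each subsystem is gone, which is what shutdown_tc4 provokes by killing the target mid-I/O. When triaging a storm like this by hand, a reachability probe against the same RDMA listener separates "target really gone" from a host-side fault. A minimal sketch with nvme-cli (assumes nvme-cli and the nvme-rdma module are available on the host; the address and port are the ones this run uses):

    # Probe the discovery service on the target this run connects to.
    # If the target was torn down, discovery fails just like the perf writes did.
    sudo modprobe nvme-rdma
    sudo nvme discover -t rdma -a 192.168.100.8 -s 4420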
00:25:49.848 Write completed with error (sct=0, sc=8)
00:25:49.848 Initializing NVMe Controllers
00:25:49.848 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:25:49.848 Controller IO queue size 128, less than required.
00:25:49.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.848 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:25:49.848 Controller IO queue size 128, less than required.
00:25:49.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.848 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:25:49.848 Controller IO queue size 128, less than required.
00:25:49.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.848 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:25:49.849 Controller IO queue size 128, less than required.
00:25:49.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.849 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:25:49.849 Controller IO queue size 128, less than required.
00:25:49.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.849 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:25:49.849 Controller IO queue size 128, less than required.
00:25:49.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.849 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:25:49.849 Controller IO queue size 128, less than required.
00:25:49.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.849 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:25:49.849 Controller IO queue size 128, less than required.
00:25:49.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.849 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:25:49.849 Controller IO queue size 128, less than required.
00:25:49.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.849 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:25:49.849 Controller IO queue size 128, less than required.
00:25:49.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:49.849 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:49.849 Initialization complete. Launching workers.
00:25:49.849 ========================================================
00:25:49.849                                                                     Latency(us)
00:25:49.849 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1597.76      68.65   80528.22     114.21 1226004.17
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1584.18      68.07   81296.72     110.44 1237683.99
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1589.44      68.30   81123.09     113.68 1241858.05
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1617.30      69.49   93173.29     111.90 2236997.97
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1624.26      69.79   92079.30     115.34 2170050.59
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1584.52      68.08   94512.70     112.32 2206344.76
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1579.42      67.87   80846.31     114.02 1200263.98
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1595.39      68.55   93922.56     110.68 2202073.49
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1622.05      69.70   92506.91     110.88 2170812.04
00:25:49.849 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1644.13      70.65   91356.39     115.00 2028525.79
00:25:49.849 ========================================================
00:25:49.849 Total                                                                :   16038.45     689.15   88174.77     110.44 2236997.97
00:25:49.849
00:25:49.849 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
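Two things stand out in the summary: every controller warned "Controller IO queue size 128, less than required", meaning the workload kept more I/O outstanding than the target advertises, and the max latencies (up to ~2.24 s) reflect writes that sat queued at the NVMe driver until the forced shutdown failed them. Following the log's own advice, a rerun at a lower queue depth keeps requests out of the driver queue; a hedged sketch using the same spdk_nvme_perf binary (the -q/-o/-w/-t/-r options are standard perf flags; the values here are illustrative, not what this job used):

    PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # Queue depth 64 stays under the advertised size of 128, so writes are not
    # queued at the NVMe driver while the target drains them.
    $PERF -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'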
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # true
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@166 -- # stoptarget
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:25:49.849 rmmod nvme_rdma
00:25:49.849 rmmod nvme_fabrics
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n '' ']'
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:25:49.849
00:25:49.849 real	0m8.621s
00:25:49.849 user	0m32.128s
00:25:49.849 sys	0m1.363s
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:49.849 ************************************
00:25:49.849 END TEST nvmf_shutdown_tc4
00:25:49.849 ************************************
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@177 -- # trap - SIGINT SIGTERM EXIT
00:25:49.849
00:25:49.849 real	0m32.042s
00:25:49.849 user	1m35.834s
00:25:49.849 sys	0m10.669s
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
16:13:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:49.849 ************************************
00:25:49.849 END TEST nvmf_shutdown
00:25:49.849 ************************************
16:13:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:25:49.849
00:25:49.849 real	15m13.371s
00:25:49.849 user	47m5.888s
00:25:49.849 sys	3m8.887s
16:13:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
16:13:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:49.849 ************************************
00:25:49.849 END TEST nvmf_target_extra
00:25:49.849 ************************************
16:13:18 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
16:13:18 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
16:13:18 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable
16:13:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:49.849 ************************************
00:25:49.849 START TEST nvmf_host
00:25:49.849 ************************************
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
00:25:50.110 * Looking for test storage...
00:25:50.110 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]]
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
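The trace above is scripts/common.sh deciding that lcov 1.15 sorts before 2, so autotest_common.sh keeps the --rc lcov_branch_coverage/lcov_function_coverage options: cmp_versions splits both versions on '.', '-' and ':' and compares field by field until one side wins. A condensed sketch of that comparison (simplified to numeric fields only; the real cmp_versions also validates each field through its decimal helper):

    cmp_lt() {
        # Return 0 (true) when version $1 sorts strictly before version $2.
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    cmp_lt 1.15 2 && echo 'old lcov: keep the --rc coverage flags'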
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:25:50.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:50.110 --rc genhtml_branch_coverage=1
00:25:50.110 --rc genhtml_function_coverage=1
00:25:50.110 --rc genhtml_legend=1
00:25:50.110 --rc geninfo_all_blocks=1
00:25:50.110 --rc geninfo_unexecuted_blocks=1
00:25:50.110
00:25:50.110 '
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:25:50.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:50.110 --rc genhtml_branch_coverage=1
00:25:50.110 --rc genhtml_function_coverage=1
00:25:50.110 --rc genhtml_legend=1
00:25:50.110 --rc geninfo_all_blocks=1
00:25:50.110 --rc geninfo_unexecuted_blocks=1
00:25:50.110
00:25:50.110 '
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:25:50.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:50.110 --rc genhtml_branch_coverage=1
00:25:50.110 --rc genhtml_function_coverage=1
00:25:50.110 --rc genhtml_legend=1
00:25:50.110 --rc geninfo_all_blocks=1
00:25:50.110 --rc geninfo_unexecuted_blocks=1
00:25:50.110
00:25:50.110 '
16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:25:50.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:50.110 --rc genhtml_branch_coverage=1
00:25:50.110 --rc genhtml_function_coverage=1
00:25:50.110 --rc genhtml_legend=1
00:25:50.110 --rc geninfo_all_blocks=1
00:25:50.110 --rc geninfo_unexecuted_blocks=1
00:25:50.110
00:25:50.110 '
16:13:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:13:18 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
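At this point common.sh has generated a random host identity (nvme gen-hostnqn) and stashed it in NVME_HOSTNQN/NVME_HOSTID, with NVME_CONNECT set to a plain 'nvme connect'. With those variables, a manual host-side connect to one of this run's subsystems would look roughly like the following (hedged sketch; the flags are standard nvme-cli options, and cnode1 is just one of the subsystems this log exercises):

    # Connect as the generated host identity, inspect, then tear down.
    sudo nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    sudo nvme list
    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1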
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.110 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:50.110 16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.371 ************************************ 00:25:50.371 START TEST nvmf_multicontroller 00:25:50.371 ************************************ 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:25:50.371 * Looking for test storage... 00:25:50.371 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:50.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.371 --rc genhtml_branch_coverage=1 00:25:50.371 --rc genhtml_function_coverage=1 00:25:50.371 --rc genhtml_legend=1 00:25:50.371 --rc geninfo_all_blocks=1 00:25:50.371 --rc geninfo_unexecuted_blocks=1 00:25:50.371 00:25:50.371 ' 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:50.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.371 --rc genhtml_branch_coverage=1 00:25:50.371 --rc genhtml_function_coverage=1 00:25:50.371 --rc genhtml_legend=1 00:25:50.371 --rc geninfo_all_blocks=1 00:25:50.371 --rc geninfo_unexecuted_blocks=1 00:25:50.371 00:25:50.371 ' 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:50.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.371 --rc genhtml_branch_coverage=1 00:25:50.371 --rc genhtml_function_coverage=1 00:25:50.371 --rc genhtml_legend=1 00:25:50.371 --rc geninfo_all_blocks=1 00:25:50.371 --rc geninfo_unexecuted_blocks=1 00:25:50.371 00:25:50.371 ' 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:50.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.371 --rc genhtml_branch_coverage=1 00:25:50.371 --rc genhtml_function_coverage=1 00:25:50.371 --rc genhtml_legend=1 00:25:50.371 --rc geninfo_all_blocks=1 00:25:50.371 --rc geninfo_unexecuted_blocks=1 00:25:50.371 00:25:50.371 ' 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
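The coverage-flag setup traced above comes from scripts/common.sh: `lcov --version | awk '{print $NF}'` yields the installed version, and a field-by-field numeric comparison (`lt 1.15 2`, implemented by `cmp_versions` with `read -ra ver1`/`ver2`) decides whether the pre-2.0 `--rc lcov_*` options apply. A minimal standalone sketch of that comparison for plain numeric dotted versions; the helper name `version_lt` is illustrative, not the in-tree name:

    # Sketch only: compare two numeric dotted versions field by field,
    # succeeding when $1 is strictly older than $2 (mirrors the traced
    # cmp_versions logic; non-numeric components are not handled).
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            (( x < y )) && return 0           # older at this field
            (( x > y )) && return 1           # newer, so not less-than
        done
        return 1                              # equal, so not less-than
    }

Invoked as in the trace, `version_lt "$(lcov --version | awk '{print $NF}')" 2` succeeds exactly when the installed lcov predates 2.0.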
00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.371 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.372 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:50.372 16:13:18 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:25:50.372 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:25:50.372 00:25:50.372 real 0m0.227s 00:25:50.372 user 0m0.132s 00:25:50.372 sys 0m0.112s 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:50.372 16:13:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:50.372 ************************************ 00:25:50.372 END TEST nvmf_multicontroller 00:25:50.372 ************************************ 00:25:50.633 16:13:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:25:50.633 16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:50.633 16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:50.633 16:13:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.633 ************************************ 00:25:50.633 START TEST nvmf_aer 00:25:50.633 ************************************ 00:25:50.633 16:13:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:25:50.633 * Looking for test storage... 
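Each suite above runs under run_test from common/autotest_common.sh, which prints the asterisk START/END banners and the real/user/sys summary; multicontroller.sh exits 0 early on RDMA because the rdma stack cannot configure the same IP for host and target, and run_test still records that skip as a pass. A condensed sketch of the wrapper pattern, with the banner internals simplified from what the log shows:

    # Sketch of the run_test pattern: banner the suite, time the script,
    # and treat any exit 0 (including an early skip) as success.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # the test script, e.g. aer.sh --transport=rdma
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # As invoked in the log:
    run_test_sketch nvmf_aer \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma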
00:25:50.633 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:50.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.633 --rc genhtml_branch_coverage=1 00:25:50.633 --rc genhtml_function_coverage=1 00:25:50.633 --rc genhtml_legend=1 00:25:50.633 --rc geninfo_all_blocks=1 00:25:50.633 --rc geninfo_unexecuted_blocks=1 00:25:50.633 00:25:50.633 ' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:50.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.633 --rc genhtml_branch_coverage=1 00:25:50.633 --rc genhtml_function_coverage=1 00:25:50.633 --rc genhtml_legend=1 00:25:50.633 --rc geninfo_all_blocks=1 00:25:50.633 --rc geninfo_unexecuted_blocks=1 00:25:50.633 00:25:50.633 ' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:50.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.633 --rc genhtml_branch_coverage=1 00:25:50.633 --rc genhtml_function_coverage=1 00:25:50.633 --rc genhtml_legend=1 00:25:50.633 --rc geninfo_all_blocks=1 00:25:50.633 --rc geninfo_unexecuted_blocks=1 00:25:50.633 00:25:50.633 ' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:50.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.633 --rc genhtml_branch_coverage=1 00:25:50.633 --rc genhtml_function_coverage=1 00:25:50.633 --rc genhtml_legend=1 00:25:50.633 --rc geninfo_all_blocks=1 00:25:50.633 --rc geninfo_unexecuted_blocks=1 00:25:50.633 00:25:50.633 ' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.633 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.634 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.634 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.634 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.893 16:13:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:57.599 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:57.599 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:57.599 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:57.600 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.600 16:13:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:57.600 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # rdma_device_init 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:57.600 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:57.600 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:57.600 altname enp217s0f0np0 00:25:57.600 altname ens818f0np0 00:25:57.600 inet 192.168.100.8/24 scope global mlx_0_0 00:25:57.600 valid_lft forever preferred_lft forever 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:57.600 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:57.600 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:57.600 altname enp217s0f1np1 00:25:57.600 altname ens818f1np1 00:25:57.600 inet 192.168.100.9/24 scope global mlx_0_1 00:25:57.600 valid_lft forever preferred_lft forever 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # 
mapfile -t rxe_net_devs 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:57.600 192.168.100.9' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:57.600 192.168.100.9' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # head -n 1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:57.600 192.168.100.9' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # tail -n +2 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@482 -- # head -n 1 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:57.600 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=2918317 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 2918317 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2918317 ']' 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:57.601 16:13:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.601 [2024-12-15 16:13:25.942736] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:57.601 [2024-12-15 16:13:25.942784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.601 [2024-12-15 16:13:26.011472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.601 [2024-12-15 16:13:26.051295] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.601 [2024-12-15 16:13:26.051333] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.601 [2024-12-15 16:13:26.051343] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.601 [2024-12-15 16:13:26.051351] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.601 [2024-12-15 16:13:26.051358] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
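Before the target starts, nvmftestinit derives the test IPs from the two Mellanox ports discovered above. The logged pipeline per interface is `ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1`, and head/tail split the resulting list into the first and second target addresses. A sketch of that derivation, with an illustrative helper name and the two interface names from this run hard-coded:

    get_ip_address_sketch() {
        # same pipeline as the trace: keep field 4 (addr/prefix), drop the prefix
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address_sketch "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 here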
00:25:57.601 [2024-12-15 16:13:26.051406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.601 [2024-12-15 16:13:26.051501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.601 [2024-12-15 16:13:26.051565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.601 [2024-12-15 16:13:26.051567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.601 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:57.601 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:25:57.601 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:57.601 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:57.601 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.861 [2024-12-15 16:13:26.231575] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13eee40/0x13f3330) succeed. 00:25:57.861 [2024-12-15 16:13:26.242071] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13f0480/0x14349d0) succeed. 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.861 Malloc0 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.861 [2024-12-15 
16:13:26.407422] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.861 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:57.861 [ 00:25:57.861 { 00:25:57.861 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:57.861 "subtype": "Discovery", 00:25:57.861 "listen_addresses": [], 00:25:57.861 "allow_any_host": true, 00:25:57.861 "hosts": [] 00:25:57.861 }, 00:25:57.861 { 00:25:57.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.861 "subtype": "NVMe", 00:25:57.861 "listen_addresses": [ 00:25:57.861 { 00:25:57.861 "trtype": "RDMA", 00:25:57.861 "adrfam": "IPv4", 00:25:57.861 "traddr": "192.168.100.8", 00:25:57.861 "trsvcid": "4420" 00:25:57.861 } 00:25:57.861 ], 00:25:57.861 "allow_any_host": true, 00:25:57.861 "hosts": [], 00:25:57.861 "serial_number": "SPDK00000000000001", 00:25:57.861 "model_number": "SPDK bdev Controller", 00:25:57.861 "max_namespaces": 2, 00:25:57.861 "min_cntlid": 1, 00:25:57.861 "max_cntlid": 65519, 00:25:57.861 "namespaces": [ 00:25:57.861 { 00:25:57.861 "nsid": 1, 00:25:57.861 "bdev_name": "Malloc0", 00:25:57.861 "name": "Malloc0", 00:25:57.861 "nguid": "36692F97EA07476BA88437E948C344FC", 00:25:57.861 "uuid": "36692f97-ea07-476b-a884-37e948c344fc" 00:25:57.861 } 00:25:57.861 ] 00:25:57.861 } 00:25:57.861 ] 00:25:58.120 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.120 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2918346 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:58.121 Malloc1 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.121 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:58.380 [ 00:25:58.380 { 00:25:58.380 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:58.380 "subtype": "Discovery", 00:25:58.380 "listen_addresses": [], 00:25:58.380 "allow_any_host": true, 00:25:58.380 "hosts": [] 00:25:58.380 }, 00:25:58.380 { 00:25:58.380 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.380 "subtype": "NVMe", 00:25:58.380 "listen_addresses": [ 00:25:58.380 { 00:25:58.380 "trtype": "RDMA", 00:25:58.380 "adrfam": "IPv4", 00:25:58.380 "traddr": "192.168.100.8", 00:25:58.380 "trsvcid": "4420" 00:25:58.380 } 00:25:58.380 ], 00:25:58.380 "allow_any_host": true, 00:25:58.380 "hosts": [], 00:25:58.380 "serial_number": "SPDK00000000000001", 00:25:58.380 "model_number": "SPDK bdev Controller", 00:25:58.380 "max_namespaces": 2, 00:25:58.380 "min_cntlid": 1, 00:25:58.380 "max_cntlid": 65519, 00:25:58.380 "namespaces": [ 00:25:58.380 { 00:25:58.380 "nsid": 1, 00:25:58.380 "bdev_name": "Malloc0", 00:25:58.380 "name": "Malloc0", 00:25:58.380 "nguid": "36692F97EA07476BA88437E948C344FC", 00:25:58.380 "uuid": "36692f97-ea07-476b-a884-37e948c344fc" 00:25:58.380 }, 00:25:58.380 { 00:25:58.380 "nsid": 2, 00:25:58.380 "bdev_name": "Malloc1", 00:25:58.380 "name": "Malloc1", 00:25:58.380 "nguid": "E426C3EFC7AB4CBDB6E10ABCE522E3DA", 00:25:58.380 "uuid": "e426c3ef-c7ab-4cbd-b6e1-0abce522e3da" 00:25:58.380 } 00:25:58.380 ] 00:25:58.380 } 00:25:58.380 ] 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2918346 00:25:58.380 Asynchronous Event Request test 00:25:58.380 Attaching to 192.168.100.8 00:25:58.380 Attached to 192.168.100.8 00:25:58.380 Registering asynchronous event callbacks... 00:25:58.380 Starting namespace attribute notice tests for all controllers... 00:25:58.380 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:58.380 aer_cb - Changed Namespace 00:25:58.380 Cleaning up... 
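[editor's note] For reference, the nvmf_aer flow traced above reduces to a short RPC sequence plus a touch-file handshake with the aer binary. A minimal sketch, assuming nvmf_tgt is already serving /var/tmp/spdk.sock and that scripts/rpc.py is the client behind the rpc_cmd wrapper (paths are illustrative; every RPC name and flag is taken from the log above):

  #!/usr/bin/env bash
  rpc() { scripts/rpc.py "$@"; }   # stand-in for the harness's rpc_cmd (assumed wrapper)

  rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc bdev_malloc_create 64 512 --name Malloc0    # 64 MB bdev, 512-byte blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  touch_file=/tmp/aer_touch_file
  rm -f "$touch_file"
  # aer creates the touch file once its AER callback is registered (-n 2: expect 2 namespaces).
  test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t "$touch_file" &
  aerpid=$!

  # waitforfile equivalent: poll up to 200 x 0.1s for the touch file, as traced above.
  for ((i = 0; i < 200; i++)); do
      [[ -e $touch_file ]] && break
      sleep 0.1
  done

  # Adding a second namespace triggers the Changed Namespace AEN the binary is waiting on.
  rpc bdev_malloc_create 64 4096 --name Malloc1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait "$aerpid"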
00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:58.380 rmmod nvme_rdma 00:25:58.380 rmmod nvme_fabrics 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 2918317 ']' 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 2918317 00:25:58.380 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2918317 ']' 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2918317 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2918317 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2918317' 00:25:58.381 killing process 
with pid 2918317 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2918317 00:25:58.381 16:13:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2918317 00:25:58.640 16:13:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:58.640 16:13:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:58.640 00:25:58.640 real 0m8.204s 00:25:58.640 user 0m6.312s 00:25:58.640 sys 0m5.606s 00:25:58.640 16:13:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:58.640 16:13:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:58.640 ************************************ 00:25:58.640 END TEST nvmf_aer 00:25:58.640 ************************************ 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.900 ************************************ 00:25:58.900 START TEST nvmf_async_init 00:25:58.900 ************************************ 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:25:58.900 * Looking for test storage... 00:25:58.900 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:58.900 
16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.900 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.160 --rc genhtml_branch_coverage=1 00:25:59.160 --rc genhtml_function_coverage=1 00:25:59.160 --rc genhtml_legend=1 00:25:59.160 --rc geninfo_all_blocks=1 00:25:59.160 --rc geninfo_unexecuted_blocks=1 00:25:59.160 00:25:59.160 ' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.160 --rc genhtml_branch_coverage=1 00:25:59.160 --rc genhtml_function_coverage=1 00:25:59.160 --rc genhtml_legend=1 00:25:59.160 --rc geninfo_all_blocks=1 00:25:59.160 --rc geninfo_unexecuted_blocks=1 00:25:59.160 00:25:59.160 ' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.160 --rc genhtml_branch_coverage=1 00:25:59.160 --rc genhtml_function_coverage=1 00:25:59.160 --rc genhtml_legend=1 00:25:59.160 --rc geninfo_all_blocks=1 00:25:59.160 --rc geninfo_unexecuted_blocks=1 00:25:59.160 00:25:59.160 ' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.160 --rc genhtml_branch_coverage=1 00:25:59.160 --rc genhtml_function_coverage=1 00:25:59.160 --rc genhtml_legend=1 00:25:59.160 --rc geninfo_all_blocks=1 00:25:59.160 --rc geninfo_unexecuted_blocks=1 00:25:59.160 00:25:59.160 ' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
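[editor's note] The lt/cmp_versions trace just above is scripts/common.sh deciding whether the installed lcov predates 2.0 before exporting the coverage flags. The same field-by-field comparison, condensed into a standalone sketch (version_lt is an illustrative name, and it splits on '.' only, where the traced helper also splits on '-' and ':'):

  # Return success (0) iff dotted version $1 is strictly less than $2.
  version_lt() {
      local -a ver1 ver2
      IFS=. read -ra ver1 <<< "$1"
      IFS=. read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < max; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov < 2: use the legacy branch/function coverage options"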
00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.160 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:59.160 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b53a655a19624f76ae0782c0c8c557c7 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.161 16:13:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:05.736 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:05.736 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:05.736 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:05.736 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # rdma_device_init 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:05.736 
16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@526 -- # allocate_nic_ips 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:05.736 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:05.736 16:13:33 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:05.737 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:05.737 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:05.737 altname enp217s0f0np0 00:26:05.737 altname ens818f0np0 00:26:05.737 inet 192.168.100.8/24 scope global mlx_0_0 00:26:05.737 valid_lft forever preferred_lft forever 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:05.737 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:05.737 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:05.737 altname enp217s0f1np1 00:26:05.737 altname ens818f1np1 00:26:05.737 inet 192.168.100.9/24 scope global mlx_0_1 00:26:05.737 valid_lft forever preferred_lft forever 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:26:05.737 192.168.100.9' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:26:05.737 192.168.100.9' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # head -n 1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:26:05.737 192.168.100.9' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # tail -n +2 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # head -n 1 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:26:05.737 16:13:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- 
host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=2921773 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 2921773 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2921773 ']' 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.737 [2024-12-15 16:13:34.079563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:05.737 [2024-12-15 16:13:34.079613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.737 [2024-12-15 16:13:34.149616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.737 [2024-12-15 16:13:34.188374] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.737 [2024-12-15 16:13:34.188413] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.737 [2024-12-15 16:13:34.188422] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.737 [2024-12-15 16:13:34.188431] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.737 [2024-12-15 16:13:34.188438] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
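[editor's note] nvmfappstart above is the usual start-and-wait idiom: launch nvmf_tgt in the background, record nvmfpid, and let waitforlisten block until the app answers on its UNIX RPC socket. A rough equivalent under the same flags seen in the log (probing readiness with rpc_get_methods is an assumption; the real waitforlisten helper's checks and timeouts differ):

  # Shm id 0, tracepoint mask 0xFFFF, reactor pinned to core 0 (-m 0x1), as above.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  for ((i = 0; i < 100; i++)); do
      # kill -0 only tests that the pid is still alive; bail out if the target died.
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      # Ready once the RPC server responds on the default socket.
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done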
00:26:05.737 [2024-12-15 16:13:34.188463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:05.737 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 [2024-12-15 16:13:34.347370] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x147ebd0/0x14830c0) succeed. 00:26:05.997 [2024-12-15 16:13:34.356396] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14800d0/0x14c4760) succeed. 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 null0 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b53a655a19624f76ae0782c0c8c557c7 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 [2024-12-15 16:13:34.439481] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 nvme0n1 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.997 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 [ 00:26:05.997 { 00:26:05.997 "name": "nvme0n1", 00:26:05.997 "aliases": [ 00:26:05.997 "b53a655a-1962-4f76-ae07-82c0c8c557c7" 00:26:05.997 ], 00:26:05.997 "product_name": "NVMe disk", 00:26:05.997 "block_size": 512, 00:26:05.997 "num_blocks": 2097152, 00:26:05.997 "uuid": "b53a655a-1962-4f76-ae07-82c0c8c557c7", 00:26:05.997 "numa_id": 1, 00:26:05.997 "assigned_rate_limits": { 00:26:05.997 "rw_ios_per_sec": 0, 00:26:05.997 "rw_mbytes_per_sec": 0, 00:26:05.997 "r_mbytes_per_sec": 0, 00:26:05.997 "w_mbytes_per_sec": 0 00:26:05.997 }, 00:26:05.997 "claimed": false, 00:26:05.997 "zoned": false, 00:26:05.997 "supported_io_types": { 00:26:05.997 "read": true, 00:26:05.997 "write": true, 00:26:05.997 "unmap": false, 00:26:05.997 "flush": true, 00:26:05.997 "reset": true, 00:26:05.997 "nvme_admin": true, 00:26:05.997 "nvme_io": true, 00:26:05.997 "nvme_io_md": false, 00:26:05.997 "write_zeroes": true, 00:26:05.997 "zcopy": false, 00:26:05.997 "get_zone_info": false, 00:26:05.997 "zone_management": false, 00:26:05.997 "zone_append": false, 00:26:05.997 "compare": true, 00:26:05.997 "compare_and_write": true, 00:26:05.997 "abort": true, 00:26:05.997 "seek_hole": false, 00:26:05.997 "seek_data": false, 00:26:05.997 "copy": true, 00:26:05.997 "nvme_iov_md": false 00:26:05.997 }, 00:26:05.997 "memory_domains": [ 00:26:05.997 { 00:26:05.997 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:05.997 "dma_device_type": 0 00:26:05.997 } 00:26:05.997 ], 00:26:05.997 "driver_specific": { 00:26:05.997 "nvme": [ 00:26:05.997 { 00:26:05.997 "trid": { 00:26:05.997 "trtype": "RDMA", 00:26:05.997 "adrfam": "IPv4", 00:26:05.997 "traddr": "192.168.100.8", 00:26:05.997 "trsvcid": "4420", 00:26:05.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:05.997 }, 00:26:05.997 "ctrlr_data": { 00:26:05.997 "cntlid": 1, 00:26:05.998 "vendor_id": "0x8086", 00:26:05.998 "model_number": "SPDK bdev Controller", 00:26:05.998 "serial_number": "00000000000000000000", 00:26:05.998 "firmware_revision": "24.09.1", 00:26:05.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.998 "oacs": { 00:26:05.998 "security": 0, 
00:26:05.998 "format": 0, 00:26:05.998 "firmware": 0, 00:26:05.998 "ns_manage": 0 00:26:05.998 }, 00:26:05.998 "multi_ctrlr": true, 00:26:05.998 "ana_reporting": false 00:26:05.998 }, 00:26:05.998 "vs": { 00:26:05.998 "nvme_version": "1.3" 00:26:05.998 }, 00:26:05.998 "ns_data": { 00:26:05.998 "id": 1, 00:26:05.998 "can_share": true 00:26:05.998 } 00:26:05.998 } 00:26:05.998 ], 00:26:05.998 "mp_policy": "active_passive" 00:26:05.998 } 00:26:05.998 } 00:26:05.998 ] 00:26:05.998 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.998 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:05.998 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.998 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.998 [2024-12-15 16:13:34.555750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:06.257 [2024-12-15 16:13:34.573178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:06.257 [2024-12-15 16:13:34.597986] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.258 [ 00:26:06.258 { 00:26:06.258 "name": "nvme0n1", 00:26:06.258 "aliases": [ 00:26:06.258 "b53a655a-1962-4f76-ae07-82c0c8c557c7" 00:26:06.258 ], 00:26:06.258 "product_name": "NVMe disk", 00:26:06.258 "block_size": 512, 00:26:06.258 "num_blocks": 2097152, 00:26:06.258 "uuid": "b53a655a-1962-4f76-ae07-82c0c8c557c7", 00:26:06.258 "numa_id": 1, 00:26:06.258 "assigned_rate_limits": { 00:26:06.258 "rw_ios_per_sec": 0, 00:26:06.258 "rw_mbytes_per_sec": 0, 00:26:06.258 "r_mbytes_per_sec": 0, 00:26:06.258 "w_mbytes_per_sec": 0 00:26:06.258 }, 00:26:06.258 "claimed": false, 00:26:06.258 "zoned": false, 00:26:06.258 "supported_io_types": { 00:26:06.258 "read": true, 00:26:06.258 "write": true, 00:26:06.258 "unmap": false, 00:26:06.258 "flush": true, 00:26:06.258 "reset": true, 00:26:06.258 "nvme_admin": true, 00:26:06.258 "nvme_io": true, 00:26:06.258 "nvme_io_md": false, 00:26:06.258 "write_zeroes": true, 00:26:06.258 "zcopy": false, 00:26:06.258 "get_zone_info": false, 00:26:06.258 "zone_management": false, 00:26:06.258 "zone_append": false, 00:26:06.258 "compare": true, 00:26:06.258 "compare_and_write": true, 00:26:06.258 "abort": true, 00:26:06.258 "seek_hole": false, 00:26:06.258 "seek_data": false, 00:26:06.258 "copy": true, 00:26:06.258 "nvme_iov_md": false 00:26:06.258 }, 00:26:06.258 "memory_domains": [ 00:26:06.258 { 00:26:06.258 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:06.258 "dma_device_type": 0 00:26:06.258 } 00:26:06.258 ], 00:26:06.258 "driver_specific": { 00:26:06.258 "nvme": [ 00:26:06.258 { 00:26:06.258 "trid": { 00:26:06.258 "trtype": "RDMA", 00:26:06.258 "adrfam": "IPv4", 00:26:06.258 "traddr": "192.168.100.8", 00:26:06.258 "trsvcid": "4420", 00:26:06.258 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:26:06.258 }, 00:26:06.258 "ctrlr_data": { 00:26:06.258 "cntlid": 2, 00:26:06.258 "vendor_id": "0x8086", 00:26:06.258 "model_number": "SPDK bdev Controller", 00:26:06.258 "serial_number": "00000000000000000000", 00:26:06.258 "firmware_revision": "24.09.1", 00:26:06.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.258 "oacs": { 00:26:06.258 "security": 0, 00:26:06.258 "format": 0, 00:26:06.258 "firmware": 0, 00:26:06.258 "ns_manage": 0 00:26:06.258 }, 00:26:06.258 "multi_ctrlr": true, 00:26:06.258 "ana_reporting": false 00:26:06.258 }, 00:26:06.258 "vs": { 00:26:06.258 "nvme_version": "1.3" 00:26:06.258 }, 00:26:06.258 "ns_data": { 00:26:06.258 "id": 1, 00:26:06.258 "can_share": true 00:26:06.258 } 00:26:06.258 } 00:26:06.258 ], 00:26:06.258 "mp_policy": "active_passive" 00:26:06.258 } 00:26:06.258 } 00:26:06.258 ] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ufYGuZj83Y 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ufYGuZj83Y 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ufYGuZj83Y 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.258 [2024-12-15 16:13:34.689019] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.258 [2024-12-15 16:13:34.713081] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:06.258 nvme0n1 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.258 [ 00:26:06.258 { 00:26:06.258 "name": "nvme0n1", 00:26:06.258 "aliases": [ 00:26:06.258 "b53a655a-1962-4f76-ae07-82c0c8c557c7" 00:26:06.258 ], 00:26:06.258 "product_name": "NVMe disk", 00:26:06.258 "block_size": 512, 00:26:06.258 "num_blocks": 2097152, 00:26:06.258 "uuid": "b53a655a-1962-4f76-ae07-82c0c8c557c7", 00:26:06.258 "numa_id": 1, 00:26:06.258 "assigned_rate_limits": { 00:26:06.258 "rw_ios_per_sec": 0, 00:26:06.258 "rw_mbytes_per_sec": 0, 00:26:06.258 "r_mbytes_per_sec": 0, 00:26:06.258 "w_mbytes_per_sec": 0 00:26:06.258 }, 00:26:06.258 "claimed": false, 00:26:06.258 "zoned": false, 00:26:06.258 "supported_io_types": { 00:26:06.258 "read": true, 00:26:06.258 "write": true, 00:26:06.258 "unmap": false, 00:26:06.258 "flush": true, 00:26:06.258 "reset": true, 00:26:06.258 "nvme_admin": true, 00:26:06.258 "nvme_io": true, 00:26:06.258 "nvme_io_md": false, 00:26:06.258 "write_zeroes": true, 00:26:06.258 "zcopy": false, 00:26:06.258 "get_zone_info": false, 00:26:06.258 "zone_management": false, 00:26:06.258 "zone_append": false, 00:26:06.258 "compare": true, 00:26:06.258 "compare_and_write": true, 00:26:06.258 "abort": true, 00:26:06.258 "seek_hole": false, 00:26:06.258 "seek_data": false, 00:26:06.258 "copy": true, 00:26:06.258 "nvme_iov_md": false 00:26:06.258 }, 00:26:06.258 "memory_domains": [ 00:26:06.258 { 00:26:06.258 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:06.258 "dma_device_type": 0 00:26:06.258 } 00:26:06.258 ], 00:26:06.258 "driver_specific": { 00:26:06.258 "nvme": [ 00:26:06.258 { 00:26:06.258 "trid": { 00:26:06.258 "trtype": "RDMA", 00:26:06.258 "adrfam": "IPv4", 00:26:06.258 "traddr": "192.168.100.8", 00:26:06.258 "trsvcid": "4421", 00:26:06.258 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:06.258 }, 00:26:06.258 "ctrlr_data": { 00:26:06.258 "cntlid": 3, 00:26:06.258 "vendor_id": "0x8086", 00:26:06.258 "model_number": "SPDK bdev Controller", 00:26:06.258 "serial_number": "00000000000000000000", 00:26:06.258 "firmware_revision": 
"24.09.1", 00:26:06.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.258 "oacs": { 00:26:06.258 "security": 0, 00:26:06.258 "format": 0, 00:26:06.258 "firmware": 0, 00:26:06.258 "ns_manage": 0 00:26:06.258 }, 00:26:06.258 "multi_ctrlr": true, 00:26:06.258 "ana_reporting": false 00:26:06.258 }, 00:26:06.258 "vs": { 00:26:06.258 "nvme_version": "1.3" 00:26:06.258 }, 00:26:06.258 "ns_data": { 00:26:06.258 "id": 1, 00:26:06.258 "can_share": true 00:26:06.258 } 00:26:06.258 } 00:26:06.258 ], 00:26:06.258 "mp_policy": "active_passive" 00:26:06.258 } 00:26:06.258 } 00:26:06.258 ] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.258 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.518 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.518 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ufYGuZj83Y 00:26:06.518 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:06.518 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:06.518 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:06.518 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:06.519 rmmod nvme_rdma 00:26:06.519 rmmod nvme_fabrics 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 2921773 ']' 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 2921773 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2921773 ']' 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2921773 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2921773 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2921773' 00:26:06.519 killing process with pid 2921773 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2921773 00:26:06.519 16:13:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2921773 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:26:06.777 00:26:06.777 real 0m7.910s 00:26:06.777 user 0m3.098s 00:26:06.777 sys 0m5.404s 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.777 ************************************ 00:26:06.777 END TEST nvmf_async_init 00:26:06.777 ************************************ 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.777 ************************************ 00:26:06.777 START TEST dma 00:26:06.777 ************************************ 00:26:06.777 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:07.037 * Looking for test storage... 
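For reference, the secure-channel setup exercised by nvmf_async_init above reduces to the RPC sequence below (a minimal sketch using the standalone scripts/rpc.py client in place of the harness's rpc_cmd wrapper; the PSK is the test's published sample interchange key, not a real secret):

  # register the TLS pre-shared key from a mode-0600 file
  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"
  scripts/rpc.py keyring_file_add_key key0 "$key_path"

  # require explicit host grants, open a TLS-only listener, and admit host1 with the key
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0

  # attach from the initiator side with the same registered key
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0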
00:26:07.037 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:07.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.037 --rc genhtml_branch_coverage=1 00:26:07.037 --rc genhtml_function_coverage=1 00:26:07.037 --rc genhtml_legend=1 00:26:07.037 --rc geninfo_all_blocks=1 00:26:07.037 --rc geninfo_unexecuted_blocks=1 00:26:07.037 00:26:07.037 ' 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:07.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.037 --rc genhtml_branch_coverage=1 00:26:07.037 --rc genhtml_function_coverage=1 00:26:07.037 --rc genhtml_legend=1 00:26:07.037 --rc geninfo_all_blocks=1 00:26:07.037 --rc geninfo_unexecuted_blocks=1 00:26:07.037 00:26:07.037 ' 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:07.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.037 --rc genhtml_branch_coverage=1 00:26:07.037 --rc genhtml_function_coverage=1 00:26:07.037 --rc genhtml_legend=1 00:26:07.037 --rc geninfo_all_blocks=1 00:26:07.037 --rc geninfo_unexecuted_blocks=1 00:26:07.037 00:26:07.037 ' 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:07.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.037 --rc genhtml_branch_coverage=1 00:26:07.037 --rc genhtml_function_coverage=1 00:26:07.037 --rc genhtml_legend=1 00:26:07.037 --rc geninfo_all_blocks=1 00:26:07.037 --rc geninfo_unexecuted_blocks=1 00:26:07.037 00:26:07.037 ' 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.037 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:07.038 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
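The xtrace_disable_per_cmd call traced just above is the harness idiom for running a noisy helper without flooding this log: the helper is eval'ed with file descriptor 15 redirected to /dev/null for its duration. A minimal sketch of the idiom (assuming fd 15 is the shell's trace descriptor, which the '15> /dev/null' redirection above implies; this is not the verbatim autotest_common.sh code):

  exec 15>&2            # route bash xtrace output through fd 15
  BASH_XTRACEFD=15
  set -x
  xtrace_disable_per_cmd() {
      # the redirection silences the trace fd only for this one eval'ed command
      eval "$* 15> /dev/null"
  }
  xtrace_disable_per_cmd _remove_spdk_ns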
00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:26:07.038 16:13:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # 
[[ mlx5 == mlx5 ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:13.614 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:13.614 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:13.614 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.614 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: 
mlx_0_1' 00:26:13.615 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # is_hw=yes 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # rdma_device_init 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@526 -- # allocate_nic_ips 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:13.615 16:13:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:13.615 16:13:42 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:13.615 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:13.615 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:13.615 altname enp217s0f0np0 00:26:13.615 altname ens818f0np0 00:26:13.615 inet 192.168.100.8/24 scope global mlx_0_0 00:26:13.615 valid_lft forever preferred_lft forever 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:13.615 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:13.615 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:13.615 altname enp217s0f1np1 00:26:13.615 altname ens818f1np1 00:26:13.615 inet 192.168.100.9/24 scope global mlx_0_1 00:26:13.615 valid_lft forever preferred_lft forever 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # return 0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # 
for net_dev in "${net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:26:13.615 192.168.100.9' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:26:13.615 192.168.100.9' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # head -n 1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:26:13.615 192.168.100.9' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # head -n 1 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # tail -n +2 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:26:13.615 16:13:42 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@505 -- # nvmfpid=2925215 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@506 -- # waitforlisten 2925215 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 2925215 ']' 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.615 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.616 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:13.875 [2024-12-15 16:13:42.224979] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:13.875 [2024-12-15 16:13:42.225031] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.875 [2024-12-15 16:13:42.294233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.875 [2024-12-15 16:13:42.333394] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.875 [2024-12-15 16:13:42.333438] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.875 [2024-12-15 16:13:42.333448] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.875 [2024-12-15 16:13:42.333457] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.875 [2024-12-15 16:13:42.333464] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
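Behind the 'Waiting for process to start up...' message above, waitforlisten is a poll loop against the target's RPC socket. A rough sketch of its shape (not the verbatim autotest_common.sh implementation):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # spin until the app answers RPCs on /var/tmp/spdk.sock, bailing out if it dies first
  while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.1
  done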
00:26:13.875 [2024-12-15 16:13:42.333518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.875 [2024-12-15 16:13:42.333521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.875 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.875 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:26:13.875 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:13.875 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.875 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:14.134 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.134 16:13:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:14.134 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:14.135 [2024-12-15 16:13:42.488677] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x117d720/0x1181c10) succeed. 00:26:14.135 [2024-12-15 16:13:42.497594] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x117ec20/0x11c32b0) succeed. 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:14.135 Malloc0 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:14.135 [2024-12-15 16:13:42.643414] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # config=() 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # local subsystem config 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:26:14.135 { 00:26:14.135 "params": { 00:26:14.135 "name": "Nvme$subsystem", 00:26:14.135 "trtype": "$TEST_TRANSPORT", 00:26:14.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.135 "adrfam": "ipv4", 00:26:14.135 "trsvcid": "$NVMF_PORT", 00:26:14.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.135 "hdgst": ${hdgst:-false}, 00:26:14.135 "ddgst": ${ddgst:-false} 00:26:14.135 }, 00:26:14.135 "method": "bdev_nvme_attach_controller" 00:26:14.135 } 00:26:14.135 EOF 00:26:14.135 )") 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@578 -- # cat 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # jq . 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@581 -- # IFS=, 00:26:14.135 16:13:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:26:14.135 "params": { 00:26:14.135 "name": "Nvme0", 00:26:14.135 "trtype": "rdma", 00:26:14.135 "traddr": "192.168.100.8", 00:26:14.135 "adrfam": "ipv4", 00:26:14.135 "trsvcid": "4420", 00:26:14.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:14.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:14.135 "hdgst": false, 00:26:14.135 "ddgst": false 00:26:14.135 }, 00:26:14.135 "method": "bdev_nvme_attach_controller" 00:26:14.135 }' 00:26:14.135 [2024-12-15 16:13:42.690828] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
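The gen_nvmf_target_json trace above shows how the harness hands test_dma its bdev configuration without a temp file: the attach-controller parameters are rendered as JSON, pretty-printed by jq, and streamed in as --json /dev/fd/62, i.e. through process substitution. A condensed sketch, under the assumption that the helper emits a full SPDK JSON config (the trace only shows the per-controller fragment):

  # hypothetical condensed stand-in for gen_nvmf_target_json
  gen_json() {
      printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "rdma", "adrfam": "ipv4",
                        "traddr": "192.168.100.8", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false } } ] } ] }'
  }
  test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
      --json <(gen_json) -b Nvme0n1 -f -x translate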
00:26:14.135 [2024-12-15 16:13:42.690874] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2925264 ] 00:26:14.394 [2024-12-15 16:13:42.757179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:14.394 [2024-12-15 16:13:42.796515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.394 [2024-12-15 16:13:42.796518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.670 bdev Nvme0n1 reports 1 memory domains 00:26:19.670 bdev Nvme0n1 supports RDMA memory domain 00:26:19.670 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:19.670 ========================================================================== 00:26:19.670 Latency [us] 00:26:19.670 IOPS MiB/s Average min max 00:26:19.670 Core 2: 21611.44 84.42 739.73 247.79 7751.65 00:26:19.670 Core 3: 21672.03 84.66 737.62 238.16 7685.97 00:26:19.670 ========================================================================== 00:26:19.670 Total : 43283.47 169.08 738.67 238.16 7751.65 00:26:19.670 00:26:19.670 Total operations: 216461, translate 216461 pull_push 0 memzero 0 00:26:19.670 16:13:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:26:19.670 16:13:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:26:19.670 16:13:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:26:19.670 [2024-12-15 16:13:48.226545] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:26:19.670 [2024-12-15 16:13:48.226599] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2926300 ] 00:26:19.929 [2024-12-15 16:13:48.292950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:19.929 [2024-12-15 16:13:48.332479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.929 [2024-12-15 16:13:48.332481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.206 bdev Malloc0 reports 2 memory domains 00:26:25.206 bdev Malloc0 doesn't support RDMA memory domain 00:26:25.206 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:25.206 ========================================================================== 00:26:25.206 Latency [us] 00:26:25.206 IOPS MiB/s Average min max 00:26:25.206 Core 2: 14345.77 56.04 1114.64 416.72 1851.05 00:26:25.206 Core 3: 14538.90 56.79 1099.80 450.90 1937.91 00:26:25.206 ========================================================================== 00:26:25.206 Total : 28884.67 112.83 1107.17 416.72 1937.91 00:26:25.206 00:26:25.206 Total operations: 144478, translate 0 pull_push 577912 memzero 0 00:26:25.206 16:13:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:26:25.206 16:13:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:26:25.206 16:13:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:25.206 16:13:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:26:25.206 Ignoring -M option 00:26:25.206 [2024-12-15 16:13:53.676328] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
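The two result blocks above show the DMA path decision in action: Nvme0n1 exports an RDMA memory domain, so every operation completes as a translate (216461 translate, 0 pull_push), while Malloc0 reports no RDMA memory domain and the engine falls back to bounce-buffer copies (0 translate, 577912 pull_push). The pull_push counter evidently tallies individual copy transfers rather than I/Os: 577912 is exactly 4 x 144478, i.e. four transfers per 4 KiB operation in this run.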
00:26:25.206 [2024-12-15 16:13:53.676382] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927110 ] 00:26:25.206 [2024-12-15 16:13:53.743767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:25.465 [2024-12-15 16:13:53.783233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.465 [2024-12-15 16:13:53.783236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.740 bdev 2add990d-9132-4689-bd4e-a77a173bd042 reports 1 memory domains 00:26:30.740 bdev 2add990d-9132-4689-bd4e-a77a173bd042 supports RDMA memory domain 00:26:30.740 Initialization complete, running randread IO for 5 sec on 2 cores 00:26:30.740 ========================================================================== 00:26:30.740 Latency [us] 00:26:30.740 IOPS MiB/s Average min max 00:26:30.740 Core 2: 65936.21 257.56 241.73 98.60 3450.70 00:26:30.740 Core 3: 68155.74 266.23 233.85 94.80 3400.74 00:26:30.740 ========================================================================== 00:26:30.740 Total : 134091.95 523.80 237.72 94.80 3450.70 00:26:30.740 00:26:30.740 Total operations: 670539, translate 0 pull_push 0 memzero 670539 00:26:30.740 16:13:59 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:26:31.000 [2024-12-15 16:13:59.336729] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:33.537 Initializing NVMe Controllers 00:26:33.537 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:26:33.537 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:33.537 Initialization complete. Launching workers. 00:26:33.537 ======================================================== 00:26:33.537 Latency(us) 00:26:33.537 Device Information : IOPS MiB/s Average min max 00:26:33.537 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7965.87 4989.11 10986.20 00:26:33.537 ======================================================== 00:26:33.537 Total : 2016.00 7.88 7965.87 4989.11 10986.20 00:26:33.537 00:26:33.537 16:14:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:26:33.537 16:14:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:26:33.537 16:14:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:33.537 16:14:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:26:33.537 [2024-12-15 16:14:01.681975] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
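The spdk_nvme_perf run above reaches the fabrics target through the -r transport ID string instead of a local PCIe address; the same key:value syntax (trtype, adrfam, traddr, trsvcid, optionally subnqn) works across SPDK's perf tools. For example, pointing the identical workload straight at the cnode0 subsystem rather than the discovery service (hypothetical variant of the invocation above, which should also sidestep the discovery-listener deprecation warning logged there):

  build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0'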
00:26:33.537 [2024-12-15 16:14:01.682025] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928437 ]
00:26:33.537 [2024-12-15 16:14:01.750099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:33.537 [2024-12-15 16:14:01.789554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:26:33.537 [2024-12-15 16:14:01.789556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:26:38.813 bdev 7608e7c8-4f9f-4c14-9427-25468305f6ac reports 1 memory domains
00:26:38.813 bdev 7608e7c8-4f9f-4c14-9427-25468305f6ac supports RDMA memory domain
00:26:38.813 Initialization complete, running randrw IO for 5 sec on 2 cores
00:26:38.813 ==========================================================================
00:26:38.813 Latency [us]
00:26:38.813 IOPS MiB/s Average min max
00:26:38.813 Core 2: 18924.52 73.92 844.82 15.52 12599.78
00:26:38.813 Core 3: 19209.09 75.04 832.28 12.33 12246.15
00:26:38.813 ==========================================================================
00:26:38.813 Total : 38133.61 148.96 838.51 12.33 12599.78
00:26:38.813
00:26:38.813 Total operations: 190689, translate 190583 pull_push 0 memzero 106
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # nvmfcleanup
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:26:38.813 rmmod nvme_rdma
00:26:38.813 rmmod nvme_fabrics
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@513 -- # '[' -n 2925215 ']'
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@514 -- # killprocess 2925215
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 2925215 ']'
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 2925215
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2925215
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2925215'
00:26:38.813 killing process with pid 2925215
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 2925215
00:26:38.813 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 2925215
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:26:39.382
00:26:39.382 real 0m32.388s
00:26:39.382 user 1m35.103s
00:26:39.382 sys 0m6.294s
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:26:39.382 ************************************
00:26:39.382 END TEST dma
00:26:39.382 ************************************
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.382 ************************************
00:26:39.382 START TEST nvmf_identify
00:26:39.382 ************************************
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma
00:26:39.382 * Looking for test storage...
00:26:39.382 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-:
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-:
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<'
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1
00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364
-- # (( v = 0 )) 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:39.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.382 --rc genhtml_branch_coverage=1 00:26:39.382 --rc genhtml_function_coverage=1 00:26:39.382 --rc genhtml_legend=1 00:26:39.382 --rc geninfo_all_blocks=1 00:26:39.382 --rc geninfo_unexecuted_blocks=1 00:26:39.382 00:26:39.382 ' 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:39.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.382 --rc genhtml_branch_coverage=1 00:26:39.382 --rc genhtml_function_coverage=1 00:26:39.382 --rc genhtml_legend=1 00:26:39.382 --rc geninfo_all_blocks=1 00:26:39.382 --rc geninfo_unexecuted_blocks=1 00:26:39.382 00:26:39.382 ' 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:39.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.382 --rc genhtml_branch_coverage=1 00:26:39.382 --rc genhtml_function_coverage=1 00:26:39.382 --rc genhtml_legend=1 00:26:39.382 --rc geninfo_all_blocks=1 00:26:39.382 --rc geninfo_unexecuted_blocks=1 00:26:39.382 00:26:39.382 ' 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:39.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.382 --rc genhtml_branch_coverage=1 00:26:39.382 --rc genhtml_function_coverage=1 00:26:39.382 --rc genhtml_legend=1 00:26:39.382 --rc geninfo_all_blocks=1 00:26:39.382 --rc geninfo_unexecuted_blocks=1 00:26:39.382 00:26:39.382 ' 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:39.382 16:14:07 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.382 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.646 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:39.646 16:14:07 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:39.646 16:14:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:46.303 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.303 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:46.303 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.304 16:14:14 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:46.304 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:46.304 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 
00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:46.304 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:46.304 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # rdma_device_init 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@526 -- # allocate_nic_ips 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:46.304 
16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:46.304 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:46.304 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:46.304 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:46.305 altname enp217s0f0np0 00:26:46.305 altname ens818f0np0 00:26:46.305 inet 192.168.100.8/24 scope global mlx_0_0 00:26:46.305 valid_lft forever preferred_lft forever 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # 
ip -o -4 addr show mlx_0_1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:46.305 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:46.305 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:46.305 altname enp217s0f1np1 00:26:46.305 altname ens818f1np1 00:26:46.305 inet 192.168.100.9/24 scope global mlx_0_1 00:26:46.305 valid_lft forever preferred_lft forever 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:26:46.305 192.168.100.9' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:26:46.305 192.168.100.9' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # head -n 1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:26:46.305 192.168.100.9' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # tail -n +2 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # head -n 1 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2932662 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2932662 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2932662 ']' 00:26:46.305 16:14:14 
nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:46.305 [2024-12-15 16:14:14.448270] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:46.305 [2024-12-15 16:14:14.448330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.305 [2024-12-15 16:14:14.519693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:46.305 [2024-12-15 16:14:14.562284] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.305 [2024-12-15 16:14:14.562326] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.305 [2024-12-15 16:14:14.562336] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.305 [2024-12-15 16:14:14.562345] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.305 [2024-12-15 16:14:14.562352] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.305 [2024-12-15 16:14:14.563707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.305 [2024-12-15 16:14:14.563727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.305 [2024-12-15 16:14:14.563815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.305 [2024-12-15 16:14:14.563826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:46.305 [2024-12-15 16:14:14.704564] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa69e40/0xa6e330) succeed. 00:26:46.305 [2024-12-15 16:14:14.715133] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa6b480/0xaaf9d0) succeed. 
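The entries above capture host/identify.sh standing up the target: nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0xF, waitforlisten blocks on /var/tmp/spdk.sock, and rpc_cmd then creates the RDMA transport (hence the two create_ib_device notices). The trace block that follows builds the rest of the target: a 64 MB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and the RDMA listeners. A minimal standalone sketch of the same RPC sequence, assuming rpc_cmd resolves to scripts/rpc.py as in autotest_common.sh and using a placeholder SPDK_DIR instead of the CI workspace path:

  #!/usr/bin/env bash
  # Sketch only: replays the rpc_cmd calls traced in this log via scripts/rpc.py.
  # SPDK_DIR is a placeholder (assumption), not the path used by this CI run.
  SPDK_DIR=/path/to/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &        # same flags as the traced run
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 1; done # crude stand-in for waitforlisten

  "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0                    # 64 MB bdev, 512-byte blocks
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  "$RPC" nvmf_get_subsystems                                     # should match the JSON dumped below
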
00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:46.305 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:46.569 Malloc0
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:46.569 [2024-12-15 16:14:14.924369] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:46.569 [
00:26:46.569 {
00:26:46.569 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:26:46.569 "subtype": "Discovery",
00:26:46.569 "listen_addresses": [
00:26:46.569 {
00:26:46.569 "trtype": "RDMA",
00:26:46.569 "adrfam": "IPv4",
00:26:46.569 "traddr": "192.168.100.8",
00:26:46.569 "trsvcid": "4420"
00:26:46.569 }
00:26:46.569 ],
00:26:46.569 "allow_any_host": true,
00:26:46.569 "hosts": []
00:26:46.569 },
00:26:46.569 {
00:26:46.569 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:26:46.569 "subtype": "NVMe",
00:26:46.569 "listen_addresses": [
00:26:46.569 {
00:26:46.569 "trtype": "RDMA",
00:26:46.569 "adrfam": "IPv4",
00:26:46.569 "traddr": "192.168.100.8",
00:26:46.569 "trsvcid": "4420"
00:26:46.569 }
00:26:46.569 ],
00:26:46.569 "allow_any_host": true,
00:26:46.569 "hosts": [],
00:26:46.569 "serial_number": "SPDK00000000000001",
00:26:46.569 "model_number": "SPDK bdev Controller",
00:26:46.569 "max_namespaces": 32,
00:26:46.569 "min_cntlid": 1,
00:26:46.569 "max_cntlid": 65519,
00:26:46.569 "namespaces": [
00:26:46.569 {
00:26:46.569 "nsid": 1,
00:26:46.569 "bdev_name": "Malloc0",
00:26:46.569 "name": "Malloc0",
00:26:46.569 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:26:46.569 "eui64": "ABCDEF0123456789",
00:26:46.569 "uuid": "285dac30-286b-449e-a1eb-5570a9d0be6b"
00:26:46.569 }
00:26:46.569 ]
00:26:46.569 }
00:26:46.569 ]
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.569 16:14:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:26:46.569 [2024-12-15 16:14:14.983720] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-12-15 16:14:14.983759] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932775 ]
[2024-12-15 16:14:15.034277] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
[2024-12-15 16:14:15.034351] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
[2024-12-15 16:14:15.034387] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
[2024-12-15 16:14:15.034392] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
[2024-12-15 16:14:15.034423] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
[2024-12-15 16:14:15.042093] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:26:46.570 [2024-12-15 16:14:15.056170] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:46.570 [2024-12-15 16:14:15.056182] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:26:46.570 [2024-12-15 16:14:15.056189] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056197] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056203] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056210] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056216] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056222] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056228] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056237] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056244] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056250] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056256] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056262] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056268] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056274] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056280] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056287] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056293] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056299] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056305] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056311] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056318] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056324] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056330] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 
16:14:15.056336] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056342] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056349] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056355] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056361] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056367] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056373] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056380] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056385] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:26:46.570 [2024-12-15 16:14:15.056392] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:46.570 [2024-12-15 16:14:15.056396] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:26:46.570 [2024-12-15 16:14:15.056418] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.056431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x184400 00:26:46.570 [2024-12-15 16:14:15.061690] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.570 [2024-12-15 16:14:15.061701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:46.570 [2024-12-15 16:14:15.061709] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061719] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:46.570 [2024-12-15 16:14:15.061726] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:46.570 [2024-12-15 16:14:15.061733] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:46.570 [2024-12-15 16:14:15.061747] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.570 [2024-12-15 16:14:15.061782] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.570 [2024-12-15 16:14:15.061789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:26:46.570 [2024-12-15 16:14:15.061796] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:46.570 [2024-12-15 16:14:15.061802] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061808] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:46.570 [2024-12-15 16:14:15.061816] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.570 [2024-12-15 16:14:15.061842] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.570 [2024-12-15 16:14:15.061848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:26:46.570 [2024-12-15 16:14:15.061855] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:46.570 [2024-12-15 16:14:15.061861] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061868] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:46.570 [2024-12-15 16:14:15.061875] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.570 [2024-12-15 16:14:15.061908] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.570 [2024-12-15 16:14:15.061913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:46.570 [2024-12-15 16:14:15.061920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:46.570 [2024-12-15 16:14:15.061926] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061934] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.570 [2024-12-15 16:14:15.061963] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.570 [2024-12-15 16:14:15.061969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:46.570 [2024-12-15 16:14:15.061975] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:46.570 [2024-12-15 16:14:15.061983] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:46.570 [2024-12-15 16:14:15.061989] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.061996] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:46.570 [2024-12-15 16:14:15.062102] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:46.570 [2024-12-15 16:14:15.062108] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:46.570 [2024-12-15 16:14:15.062118] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.062126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.570 [2024-12-15 16:14:15.062151] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.570 [2024-12-15 16:14:15.062156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:46.570 [2024-12-15 16:14:15.062163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:46.570 [2024-12-15 16:14:15.062169] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.062177] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.062185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.570 [2024-12-15 16:14:15.062206] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.570 [2024-12-15 16:14:15.062212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:46.570 [2024-12-15 16:14:15.062218] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:46.570 [2024-12-15 16:14:15.062224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:46.570 [2024-12-15 16:14:15.062230] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x184400 00:26:46.570 [2024-12-15 16:14:15.062237] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:46.571 [2024-12-15 16:14:15.062252] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:46.571 [2024-12-15 16:14:15.062262] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184400 00:26:46.571 [2024-12-15 16:14:15.062309] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.571 [2024-12-15 16:14:15.062314] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:46.571 [2024-12-15 16:14:15.062324] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:46.571 [2024-12-15 16:14:15.062330] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:46.571 [2024-12-15 16:14:15.062336] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:46.571 [2024-12-15 16:14:15.062344] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:46.571 [2024-12-15 16:14:15.062350] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:46.571 [2024-12-15 16:14:15.062356] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:46.571 [2024-12-15 16:14:15.062362] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:46.571 [2024-12-15 16:14:15.062380] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.571 [2024-12-15 16:14:15.062414] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.571 [2024-12-15 16:14:15.062420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:46.571 [2024-12-15 16:14:15.062430] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.571 [2024-12-15 16:14:15.062444] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.571 [2024-12-15 16:14:15.062458] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.571 [2024-12-15 16:14:15.062472] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.571 [2024-12-15 16:14:15.062485] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:26:46.571 [2024-12-15 16:14:15.062490] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062502] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:46.571 [2024-12-15 16:14:15.062509] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.571 [2024-12-15 16:14:15.062540] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.571 [2024-12-15 16:14:15.062546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:26:46.571 [2024-12-15 16:14:15.062553] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:46.571 [2024-12-15 16:14:15.062559] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:46.571 [2024-12-15 16:14:15.062567] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062576] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184400 00:26:46.571 [2024-12-15 16:14:15.062610] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.571 [2024-12-15 16:14:15.062616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:46.571 [2024-12-15 16:14:15.062623] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062634] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:46.571 [2024-12-15 16:14:15.062658] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x184400 00:26:46.571 [2024-12-15 16:14:15.062674] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.571 [2024-12-15 16:14:15.062700] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.571 [2024-12-15 16:14:15.062706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:46.571 [2024-12-15 16:14:15.062717] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0ac0 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x184400 00:26:46.571 [2024-12-15 16:14:15.062731] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062737] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.571 [2024-12-15 16:14:15.062742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:46.571 [2024-12-15 16:14:15.062749] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062755] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.571 [2024-12-15 16:14:15.062761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:46.571 [2024-12-15 16:14:15.062770] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x184400 00:26:46.571 [2024-12-15 16:14:15.062784] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x184400 00:26:46.571 [2024-12-15 16:14:15.062803] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.571 [2024-12-15 16:14:15.062808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:46.571 [2024-12-15 16:14:15.062819] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x184400 00:26:46.571 ===================================================== 00:26:46.571 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:46.571 ===================================================== 00:26:46.571 Controller Capabilities/Features 00:26:46.571 ================================ 00:26:46.571 Vendor ID: 0000 00:26:46.571 Subsystem Vendor ID: 0000 00:26:46.571 Serial Number: .................... 00:26:46.571 Model Number: ........................................ 
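The debug trace leading into this report is SPDK's standard controller-initialization state machine carried over fabrics PROPERTY GET/SET commands: read CAP, check CC.EN, disable and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, then reset the admin queue, IDENTIFY, configure AER, and set the keep-alive timer. Below is a minimal sketch of just the enable handshake against SPDK's register definitions (the identify report resumes after it). The `prop_get32`/`prop_set32` accessors are hypothetical stand-ins for the PROPERTY GET/SET exchanges — they are not SPDK API — and real code bounds each poll with the 15000 ms timeouts the trace shows.

```c
#include <stdbool.h>
#include <stddef.h>
#include "spdk/nvme_spec.h"

/* Hypothetical property accessors standing in for the FABRIC
 * PROPERTY GET/SET exchanges visible in the trace above. */
extern uint32_t prop_get32(size_t ofs);
extern void prop_set32(size_t ofs, uint32_t val);

/* Disable, wait for CSTS.RDY = 0, enable, wait for CSTS.RDY = 1 --
 * the same state sequence nvme_ctrlr.c logs above. Unbounded polls
 * are a simplification; the trace uses 15000 ms timeouts. */
static bool
enable_controller(void)
{
	union spdk_nvme_cc_register cc;
	union spdk_nvme_csts_register csts;

	cc.raw = prop_get32(offsetof(struct spdk_nvme_registers, cc));
	cc.bits.en = 0;
	prop_set32(offsetof(struct spdk_nvme_registers, cc), cc.raw);
	do {
		csts.raw = prop_get32(offsetof(struct spdk_nvme_registers, csts));
	} while (csts.bits.rdy != 0);

	cc.bits.en = 1;
	prop_set32(offsetof(struct spdk_nvme_registers, cc), cc.raw);
	do {
		csts.raw = prop_get32(offsetof(struct spdk_nvme_registers, csts));
	} while (csts.bits.rdy != 1);

	return true;
}
```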
00:26:46.571 Firmware Version: 24.09.1 00:26:46.571 Recommended Arb Burst: 0 00:26:46.571 IEEE OUI Identifier: 00 00 00 00:26:46.571 Multi-path I/O 00:26:46.571 May have multiple subsystem ports: No 00:26:46.571 May have multiple controllers: No 00:26:46.571 Associated with SR-IOV VF: No 00:26:46.571 Max Data Transfer Size: 131072 00:26:46.571 Max Number of Namespaces: 0 00:26:46.571 Max Number of I/O Queues: 1024 00:26:46.571 NVMe Specification Version (VS): 1.3 00:26:46.571 NVMe Specification Version (Identify): 1.3 00:26:46.571 Maximum Queue Entries: 128 00:26:46.571 Contiguous Queues Required: Yes 00:26:46.571 Arbitration Mechanisms Supported 00:26:46.571 Weighted Round Robin: Not Supported 00:26:46.571 Vendor Specific: Not Supported 00:26:46.571 Reset Timeout: 15000 ms 00:26:46.571 Doorbell Stride: 4 bytes 00:26:46.571 NVM Subsystem Reset: Not Supported 00:26:46.571 Command Sets Supported 00:26:46.571 NVM Command Set: Supported 00:26:46.571 Boot Partition: Not Supported 00:26:46.571 Memory Page Size Minimum: 4096 bytes 00:26:46.571 Memory Page Size Maximum: 4096 bytes 00:26:46.571 Persistent Memory Region: Not Supported 00:26:46.571 Optional Asynchronous Events Supported 00:26:46.571 Namespace Attribute Notices: Not Supported 00:26:46.571 Firmware Activation Notices: Not Supported 00:26:46.571 ANA Change Notices: Not Supported 00:26:46.571 PLE Aggregate Log Change Notices: Not Supported 00:26:46.571 LBA Status Info Alert Notices: Not Supported 00:26:46.571 EGE Aggregate Log Change Notices: Not Supported 00:26:46.571 Normal NVM Subsystem Shutdown event: Not Supported 00:26:46.571 Zone Descriptor Change Notices: Not Supported 00:26:46.571 Discovery Log Change Notices: Supported 00:26:46.571 Controller Attributes 00:26:46.571 128-bit Host Identifier: Not Supported 00:26:46.571 Non-Operational Permissive Mode: Not Supported 00:26:46.571 NVM Sets: Not Supported 00:26:46.571 Read Recovery Levels: Not Supported 00:26:46.571 Endurance Groups: Not Supported 00:26:46.571 Predictable Latency Mode: Not Supported 00:26:46.571 Traffic Based Keep ALive: Not Supported 00:26:46.571 Namespace Granularity: Not Supported 00:26:46.571 SQ Associations: Not Supported 00:26:46.571 UUID List: Not Supported 00:26:46.571 Multi-Domain Subsystem: Not Supported 00:26:46.571 Fixed Capacity Management: Not Supported 00:26:46.572 Variable Capacity Management: Not Supported 00:26:46.572 Delete Endurance Group: Not Supported 00:26:46.572 Delete NVM Set: Not Supported 00:26:46.572 Extended LBA Formats Supported: Not Supported 00:26:46.572 Flexible Data Placement Supported: Not Supported 00:26:46.572 00:26:46.572 Controller Memory Buffer Support 00:26:46.572 ================================ 00:26:46.572 Supported: No 00:26:46.572 00:26:46.572 Persistent Memory Region Support 00:26:46.572 ================================ 00:26:46.572 Supported: No 00:26:46.572 00:26:46.572 Admin Command Set Attributes 00:26:46.572 ============================ 00:26:46.572 Security Send/Receive: Not Supported 00:26:46.572 Format NVM: Not Supported 00:26:46.572 Firmware Activate/Download: Not Supported 00:26:46.572 Namespace Management: Not Supported 00:26:46.572 Device Self-Test: Not Supported 00:26:46.572 Directives: Not Supported 00:26:46.572 NVMe-MI: Not Supported 00:26:46.572 Virtualization Management: Not Supported 00:26:46.572 Doorbell Buffer Config: Not Supported 00:26:46.572 Get LBA Status Capability: Not Supported 00:26:46.572 Command & Feature Lockdown Capability: Not Supported 00:26:46.572 Abort Command Limit: 1 00:26:46.572 
Async Event Request Limit: 4 00:26:46.572 Number of Firmware Slots: N/A 00:26:46.572 Firmware Slot 1 Read-Only: N/A 00:26:46.572 Firmware Activation Without Reset: N/A 00:26:46.572 Multiple Update Detection Support: N/A 00:26:46.572 Firmware Update Granularity: No Information Provided 00:26:46.572 Per-Namespace SMART Log: No 00:26:46.572 Asymmetric Namespace Access Log Page: Not Supported 00:26:46.572 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:46.572 Command Effects Log Page: Not Supported 00:26:46.572 Get Log Page Extended Data: Supported 00:26:46.572 Telemetry Log Pages: Not Supported 00:26:46.572 Persistent Event Log Pages: Not Supported 00:26:46.572 Supported Log Pages Log Page: May Support 00:26:46.572 Commands Supported & Effects Log Page: Not Supported 00:26:46.572 Feature Identifiers & Effects Log Page:May Support 00:26:46.572 NVMe-MI Commands & Effects Log Page: May Support 00:26:46.572 Data Area 4 for Telemetry Log: Not Supported 00:26:46.572 Error Log Page Entries Supported: 128 00:26:46.572 Keep Alive: Not Supported 00:26:46.572 00:26:46.572 NVM Command Set Attributes 00:26:46.572 ========================== 00:26:46.572 Submission Queue Entry Size 00:26:46.572 Max: 1 00:26:46.572 Min: 1 00:26:46.572 Completion Queue Entry Size 00:26:46.572 Max: 1 00:26:46.572 Min: 1 00:26:46.572 Number of Namespaces: 0 00:26:46.572 Compare Command: Not Supported 00:26:46.572 Write Uncorrectable Command: Not Supported 00:26:46.572 Dataset Management Command: Not Supported 00:26:46.572 Write Zeroes Command: Not Supported 00:26:46.572 Set Features Save Field: Not Supported 00:26:46.572 Reservations: Not Supported 00:26:46.572 Timestamp: Not Supported 00:26:46.572 Copy: Not Supported 00:26:46.572 Volatile Write Cache: Not Present 00:26:46.572 Atomic Write Unit (Normal): 1 00:26:46.572 Atomic Write Unit (PFail): 1 00:26:46.572 Atomic Compare & Write Unit: 1 00:26:46.572 Fused Compare & Write: Supported 00:26:46.572 Scatter-Gather List 00:26:46.572 SGL Command Set: Supported 00:26:46.572 SGL Keyed: Supported 00:26:46.572 SGL Bit Bucket Descriptor: Not Supported 00:26:46.572 SGL Metadata Pointer: Not Supported 00:26:46.572 Oversized SGL: Not Supported 00:26:46.572 SGL Metadata Address: Not Supported 00:26:46.572 SGL Offset: Supported 00:26:46.572 Transport SGL Data Block: Not Supported 00:26:46.572 Replay Protected Memory Block: Not Supported 00:26:46.572 00:26:46.572 Firmware Slot Information 00:26:46.572 ========================= 00:26:46.572 Active slot: 0 00:26:46.572 00:26:46.572 00:26:46.572 Error Log 00:26:46.572 ========= 00:26:46.572 00:26:46.572 Active Namespaces 00:26:46.572 ================= 00:26:46.572 Discovery Log Page 00:26:46.572 ================== 00:26:46.572 Generation Counter: 2 00:26:46.572 Number of Records: 2 00:26:46.572 Record Format: 0 00:26:46.572 00:26:46.572 Discovery Log Entry 0 00:26:46.572 ---------------------- 00:26:46.572 Transport Type: 1 (RDMA) 00:26:46.572 Address Family: 1 (IPv4) 00:26:46.572 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:46.572 Entry Flags: 00:26:46.572 Duplicate Returned Information: 1 00:26:46.572 Explicit Persistent Connection Support for Discovery: 1 00:26:46.572 Transport Requirements: 00:26:46.572 Secure Channel: Not Required 00:26:46.572 Port ID: 0 (0x0000) 00:26:46.572 Controller ID: 65535 (0xffff) 00:26:46.572 Admin Max SQ Size: 128 00:26:46.572 Transport Service Identifier: 4420 00:26:46.572 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:46.572 Transport Address: 192.168.100.8 
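Entry 0 above describes the discovery subsystem itself (its RDMA transport details follow just below), and Entry 1 describes the NVM subsystem nqn.2016-06.io.spdk:cnode1. These records come from the GET LOG PAGE (02) commands with log page 0x70 (SPDK_NVME_LOG_DISCOVERY) earlier in the trace, issued once for the header and again for the full entries. A sketch of fetching and printing the same records through SPDK's public API follows, assuming a pre-connected `ctrlr`; the fixed 4 KiB buffer and the synchronous poll loop are simplifications (real code, like the trace, reads the header first and re-reads at the right size).

```c
#include <errno.h>
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

/* Completion callback: print the same fields the report above shows.
 * The string fields are fixed-width and not NUL-terminated, hence the
 * explicit printf precisions. */
static void
discovery_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	struct spdk_nvmf_discovery_log_page *log = ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
		return;
	}
	printf("Generation Counter: %" PRIu64 "\n", log->genctr);
	printf("Number of Records: %" PRIu64 "\n", log->numrec);
	for (uint64_t i = 0; i < log->numrec; i++) {
		struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

		printf("Entry %" PRIu64 ": trtype %u subtype %u trsvcid %.32s "
		       "subnqn %.256s traddr %.256s\n",
		       i, e->trtype, e->subtype, e->trsvcid, e->subnqn, e->traddr);
	}
}

/* One-shot fetch of the discovery log into a DMA-able buffer. A 4 KiB
 * page holds the 1 KiB header plus the two entries seen above. */
static int
fetch_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	void *buf = spdk_dma_zmalloc(0x1000, 0x1000, NULL);
	int rc;

	if (buf == NULL) {
		return -ENOMEM;
	}
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      0, buf, 0x1000, 0,
					      discovery_log_done, buf);
	if (rc == 0) {
		while (spdk_nvme_ctrlr_process_admin_completions(ctrlr) == 0) {
			/* poll until discovery_log_done runs */
		}
	}
	spdk_dma_free(buf);
	return rc;
}
```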
00:26:46.572 Transport Specific Address Subtype - RDMA 00:26:46.572 RDMA QP Service Type: 1 (Reliable Connected) 00:26:46.572 RDMA Provider Type: 1 (No provider specified) 00:26:46.572 RDMA CM Service: 1 (RDMA_CM) 00:26:46.572 Discovery Log Entry 1 00:26:46.572 ---------------------- 00:26:46.572 Transport Type: 1 (RDMA) 00:26:46.572 Address Family: 1 (IPv4) 00:26:46.572 Subsystem Type: 2 (NVM Subsystem) 00:26:46.572 Entry Flags: 00:26:46.572 Duplicate Returned Information: 0 00:26:46.572 Explicit Persistent Connection Support for Discovery: 0 00:26:46.572 Transport Requirements: 00:26:46.572 Secure Channel: Not Required 00:26:46.572 Port ID: 0 (0x0000) 00:26:46.572 Controller ID: 65535 (0xffff) 00:26:46.572 Admin Max SQ Size: [2024-12-15 16:14:15.062886] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:46.572 [2024-12-15 16:14:15.062900] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 65202 doesn't match qid 00:26:46.572 [2024-12-15 16:14:15.062913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32691 cdw0:5 sqhd:79b0 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.062920] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 65202 doesn't match qid 00:26:46.572 [2024-12-15 16:14:15.062929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32691 cdw0:5 sqhd:79b0 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.062935] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 65202 doesn't match qid 00:26:46.572 [2024-12-15 16:14:15.062943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32691 cdw0:5 sqhd:79b0 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.062949] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 65202 doesn't match qid 00:26:46.572 [2024-12-15 16:14:15.062958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32691 cdw0:5 sqhd:79b0 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.062967] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.062975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.572 [2024-12-15 16:14:15.062992] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.572 [2024-12-15 16:14:15.062999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.063007] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.572 [2024-12-15 16:14:15.063021] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063037] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.572 [2024-12-15 16:14:15.063043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.063049] 
nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:46.572 [2024-12-15 16:14:15.063056] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:46.572 [2024-12-15 16:14:15.063062] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063070] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.572 [2024-12-15 16:14:15.063096] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.572 [2024-12-15 16:14:15.063102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.063109] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063118] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.572 [2024-12-15 16:14:15.063143] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.572 [2024-12-15 16:14:15.063149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.063156] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063166] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.572 [2024-12-15 16:14:15.063192] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.572 [2024-12-15 16:14:15.063197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:26:46.572 [2024-12-15 16:14:15.063204] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063213] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.572 [2024-12-15 16:14:15.063221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.572 [2024-12-15 16:14:15.063241] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063253] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063261] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063285] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063297] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063306] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063337] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063349] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063358] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063387] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063399] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063408] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063433] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063446] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063455] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063485] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:26:46.573 [2024-12-15 16:14:15.063496] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063505] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063529] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063541] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063549] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063577] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063588] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063597] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063620] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063633] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063641] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063669] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063680] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063693] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 
16:14:15.063721] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063734] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063743] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063767] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063779] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063788] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063811] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063823] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063832] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063863] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063875] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063883] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063908] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063920] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063929] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.063954] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.063960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.063966] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063975] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.063983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064002] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.064009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.064016] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064024] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064050] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.064055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.064061] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064070] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064097] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.064103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.064109] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064118] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064149] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.064154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:46.573 
[2024-12-15 16:14:15.064160] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064169] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064196] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.064202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.064208] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064217] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064240] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.064246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.064252] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064260] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064286] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.064293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.064299] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064308] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064337] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.573 [2024-12-15 16:14:15.064342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:26:46.573 [2024-12-15 16:14:15.064349] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064357] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.573 [2024-12-15 16:14:15.064365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.573 [2024-12-15 16:14:15.064386] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064398] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064407] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064434] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064446] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064454] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064479] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064492] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064500] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064524] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064536] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064544] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064569] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064581] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064590] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064618] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064629] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064638] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064669] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064681] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064695] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064724] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064735] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064744] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064766] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064778] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064786] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064814] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 
16:14:15.064825] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064834] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064861] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064873] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064882] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064905] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064917] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064925] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064953] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.064958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.064965] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064973] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.064981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.064999] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065011] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065019] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065042] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065054] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065063] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065086] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065098] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065106] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065136] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065148] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065157] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065182] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065194] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065203] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065228] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065240] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065249] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065272] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065284] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065293] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065316] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065328] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065337] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065368] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065379] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065388] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065413] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065425] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065434] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065463] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 
16:14:15.065475] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065483] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065509] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065521] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065529] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065553] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065564] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065573] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065598] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065610] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065619] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.065642] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.574 [2024-12-15 16:14:15.065648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:46.574 [2024-12-15 16:14:15.065654] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065664] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.574 [2024-12-15 16:14:15.065672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.574 [2024-12-15 16:14:15.069691] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:26:46.574 [2024-12-15 16:14:15.069699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:26:46.574 [2024-12-15 16:14:15.069705] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x184400
00:26:46.574 [2024-12-15 16:14:15.069714] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400
00:26:46.574 [2024-12-15 16:14:15.069722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:26:46.574 [2024-12-15 16:14:15.069740] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:26:46.574 [2024-12-15 16:14:15.069745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0
00:26:46.574 [2024-12-15 16:14:15.069751] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x184400
00:26:46.574 [2024-12-15 16:14:15.069758] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:26:46.574 128
00:26:46.574 Transport Service Identifier: 4420
00:26:46.574 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:26:46.574 Transport Address: 192.168.100.8
00:26:46.574 Transport Specific Address Subtype - RDMA
00:26:46.574 RDMA QP Service Type: 1 (Reliable Connected)
00:26:46.574 RDMA Provider Type: 1 (No provider specified)
00:26:46.574 RDMA CM Service: 1 (RDMA_CM)
00:26:46.575 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:26:46.838 [2024-12-15 16:14:15.141261] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:26:46.838 [2024-12-15 16:14:15.141306] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932844 ]
00:26:46.838 [2024-12-15 16:14:15.188888] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:26:46.838 [2024-12-15 16:14:15.188954] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:26:46.838 [2024-12-15 16:14:15.188971] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:26:46.838 [2024-12-15 16:14:15.188976] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:26:46.838 [2024-12-15 16:14:15.189000] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:26:46.838 [2024-12-15 16:14:15.199463] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
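For context, the spdk_nvme_identify run launched above reduces to a short sequence against SPDK's public host API in spdk/nvme.h: parse the same -r transport-ID string, connect (which drives the admin-queue state machine traced below), read the identify data, and detach. A minimal sketch, assuming only the public headers; this is illustrative, not the tool's actual source:

/* Sketch of the identify flow over RDMA using SPDK's public host API.
 * The transport-ID string matches the -r argument above; the app name
 * and error handling are illustrative. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";    /* hypothetical app name */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* Same connection parameters the test passes via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the admin-queue bring-up traced below: FABRIC CONNECT,
     * read VS/CAP, set CC.EN = 1, wait for CSTS.RDY = 1, then IDENTIFY. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Vendor ID: %04x  Model: %.40s  Namespaces: %u\n",
           cdata->vid, (const char *)cdata->mn, cdata->nn);

    spdk_nvme_detach(ctrlr);    /* triggers the shutdown sequence (CC.SHN) */
    return 0;
}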
00:26:46.838 [2024-12-15 16:14:15.210065] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:46.838 [2024-12-15 16:14:15.210075] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:26:46.838 [2024-12-15 16:14:15.210081] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210089] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210098] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210104] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210110] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210116] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210122] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210128] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210134] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210140] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210147] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210153] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210159] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210165] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210171] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210177] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210183] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210189] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210195] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210201] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210207] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210214] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 16:14:15.210220] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x184400 00:26:46.838 [2024-12-15 
16:14:15.210226] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210232] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210238] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210244] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210250] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210256] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210262] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210268] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210274] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:26:46.839 [2024-12-15 16:14:15.210280] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:46.839 [2024-12-15 16:14:15.210284] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:26:46.839 [2024-12-15 16:14:15.210299] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.210311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x184400 00:26:46.839 [2024-12-15 16:14:15.214690] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.214700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.214707] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214714] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:46.839 [2024-12-15 16:14:15.214721] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:46.839 [2024-12-15 16:14:15.214727] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:46.839 [2024-12-15 16:14:15.214739] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.214768] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.214774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.214781] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:46.839 [2024-12-15 16:14:15.214787] nvme_rdma.c:2389:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214793] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:46.839 [2024-12-15 16:14:15.214801] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.214825] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.214831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.214837] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:46.839 [2024-12-15 16:14:15.214843] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214850] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:46.839 [2024-12-15 16:14:15.214858] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.214883] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.214889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.214896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:46.839 [2024-12-15 16:14:15.214902] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214910] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.214938] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.214944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.214950] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:46.839 [2024-12-15 16:14:15.214956] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:46.839 [2024-12-15 16:14:15.214962] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.214968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:26:46.839 [2024-12-15 16:14:15.215075] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:46.839 [2024-12-15 16:14:15.215080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:46.839 [2024-12-15 16:14:15.215089] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.215118] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215130] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:46.839 [2024-12-15 16:14:15.215136] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215145] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.215171] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215182] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:46.839 [2024-12-15 16:14:15.215188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215194] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215201] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:46.839 [2024-12-15 16:14:15.215210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215219] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184400 00:26:46.839 [2024-12-15 16:14:15.215264] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215280] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:46.839 [2024-12-15 16:14:15.215286] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:46.839 [2024-12-15 16:14:15.215291] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:46.839 [2024-12-15 16:14:15.215296] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:46.839 [2024-12-15 16:14:15.215302] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:46.839 [2024-12-15 16:14:15.215308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215314] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215330] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.215356] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215370] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.839 [2024-12-15 16:14:15.215385] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.839 [2024-12-15 16:14:15.215398] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.839 [2024-12-15 16:14:15.215412] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.839 [2024-12-15 16:14:15.215425] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215431] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215443] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215451] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.215475] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215488] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:46.839 [2024-12-15 16:14:15.215494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215500] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215509] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215517] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215525] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.215553] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215609] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215616] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215632] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184400 00:26:46.839 [2024-12-15 16:14:15.215667] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215694] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:46.839 
[2024-12-15 16:14:15.215705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215711] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215727] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184400 00:26:46.839 [2024-12-15 16:14:15.215772] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215794] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215804] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215812] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184400 00:26:46.839 [2024-12-15 16:14:15.215841] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215858] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215864] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215871] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215894] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
ID (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215907] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:46.839 [2024-12-15 16:14:15.215913] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:46.839 [2024-12-15 16:14:15.215919] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:46.839 [2024-12-15 16:14:15.215932] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.215948] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.839 [2024-12-15 16:14:15.215966] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215978] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.215984] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.215990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.215996] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216005] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.216031] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.216037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.216043] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216052] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.216079] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.216085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.216091] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cf8a8 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216100] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.839 [2024-12-15 16:14:15.216127] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.839 [2024-12-15 16:14:15.216133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:26:46.839 [2024-12-15 16:14:15.216139] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216153] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x184400 00:26:46.839 [2024-12-15 16:14:15.216169] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x184400 00:26:46.839 [2024-12-15 16:14:15.216185] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0ac0 length 0x40 lkey 0x184400 00:26:46.839 [2024-12-15 16:14:15.216192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x184400 00:26:46.839 [2024-12-15 16:14:15.216200] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x184400 00:26:46.840 [2024-12-15 16:14:15.216216] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.840 [2024-12-15 16:14:15.216222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:46.840 [2024-12-15 16:14:15.216233] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216239] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.840 [2024-12-15 16:14:15.216245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:46.840 [2024-12-15 16:14:15.216257] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216265] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.840 [2024-12-15 16:14:15.216270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
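The GET LOG PAGE commands interleaved above (error log, SMART/health, firmware slot, commands supported and effects) are ordinary admin commands on the same queue pair: submit with a callback, then poll the admin completion queue, which is exactly what the CQ recv completion lines trace. A sketch of fetching one such page through the public API; fetch_health_log and log_page_done are illustrative names, not SPDK API:

/* Sketch: fetch one log page the way the trace above shows them being
 * issued -- submit on the admin queue, then poll for the completion. */
#include <stdbool.h>
#include "spdk/nvme.h"

static void log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    *(bool *)cb_arg = true;    /* echoed above as "GET LOG PAGE ... SUCCESS" */
}

static int fetch_health_log(struct spdk_nvme_ctrlr *ctrlr,
                            struct spdk_nvme_health_information_page *page)
{
    bool done = false;
    int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                 SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
                 page, sizeof(*page), 0, log_page_done, &done);
    if (rc != 0) {
        return rc;
    }
    while (!done) {    /* the CQ polling that produces the *DEBUG* lines above */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    return 0;
}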
00:26:46.840 [2024-12-15 16:14:15.216277] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x184400
00:26:46.840 [2024-12-15 16:14:15.216283] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:26:46.840 [2024-12-15 16:14:15.216289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:46.840 [2024-12-15 16:14:15.216298] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x184400
00:26:46.840 =====================================================
00:26:46.840 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:26:46.840 =====================================================
00:26:46.840 Controller Capabilities/Features
00:26:46.840 ================================
00:26:46.840 Vendor ID: 8086
00:26:46.840 Subsystem Vendor ID: 8086
00:26:46.840 Serial Number: SPDK00000000000001
00:26:46.840 Model Number: SPDK bdev Controller
00:26:46.840 Firmware Version: 24.09.1
00:26:46.840 Recommended Arb Burst: 6
00:26:46.840 IEEE OUI Identifier: e4 d2 5c
00:26:46.840 Multi-path I/O
00:26:46.840 May have multiple subsystem ports: Yes
00:26:46.840 May have multiple controllers: Yes
00:26:46.840 Associated with SR-IOV VF: No
00:26:46.840 Max Data Transfer Size: 131072
00:26:46.840 Max Number of Namespaces: 32
00:26:46.840 Max Number of I/O Queues: 127
00:26:46.840 NVMe Specification Version (VS): 1.3
00:26:46.840 NVMe Specification Version (Identify): 1.3
00:26:46.840 Maximum Queue Entries: 128
00:26:46.840 Contiguous Queues Required: Yes
00:26:46.840 Arbitration Mechanisms Supported
00:26:46.840 Weighted Round Robin: Not Supported
00:26:46.840 Vendor Specific: Not Supported
00:26:46.840 Reset Timeout: 15000 ms
00:26:46.840 Doorbell Stride: 4 bytes
00:26:46.840 NVM Subsystem Reset: Not Supported
00:26:46.840 Command Sets Supported
00:26:46.840 NVM Command Set: Supported
00:26:46.840 Boot Partition: Not Supported
00:26:46.840 Memory Page Size Minimum: 4096 bytes
00:26:46.840 Memory Page Size Maximum: 4096 bytes
00:26:46.840 Persistent Memory Region: Not Supported
00:26:46.840 Optional Asynchronous Events Supported
00:26:46.840 Namespace Attribute Notices: Supported
00:26:46.840 Firmware Activation Notices: Not Supported
00:26:46.840 ANA Change Notices: Not Supported
00:26:46.840 PLE Aggregate Log Change Notices: Not Supported
00:26:46.840 LBA Status Info Alert Notices: Not Supported
00:26:46.840 EGE Aggregate Log Change Notices: Not Supported
00:26:46.840 Normal NVM Subsystem Shutdown event: Not Supported
00:26:46.840 Zone Descriptor Change Notices: Not Supported
00:26:46.840 Discovery Log Change Notices: Not Supported
00:26:46.840 Controller Attributes
00:26:46.840 128-bit Host Identifier: Supported
00:26:46.840 Non-Operational Permissive Mode: Not Supported
00:26:46.840 NVM Sets: Not Supported
00:26:46.840 Read Recovery Levels: Not Supported
00:26:46.840 Endurance Groups: Not Supported
00:26:46.840 Predictable Latency Mode: Not Supported
00:26:46.840 Traffic Based Keep ALive: Not Supported
00:26:46.840 Namespace Granularity: Not Supported
00:26:46.840 SQ Associations: Not Supported
00:26:46.840 UUID List: Not Supported
00:26:46.840 Multi-Domain Subsystem: Not Supported
00:26:46.840 Fixed Capacity Management: Not Supported
00:26:46.840 Variable Capacity Management: Not Supported
00:26:46.840 Delete Endurance Group: Not Supported
00:26:46.840 Delete NVM Set: Not Supported
00:26:46.840 Extended LBA Formats Supported: Not Supported
00:26:46.840 Flexible Data Placement Supported: Not Supported
00:26:46.840
00:26:46.840 Controller Memory Buffer Support
00:26:46.840 ================================
00:26:46.840 Supported: No
00:26:46.840
00:26:46.840 Persistent Memory Region Support
00:26:46.840 ================================
00:26:46.840 Supported: No
00:26:46.840
00:26:46.840 Admin Command Set Attributes
00:26:46.840 ============================
00:26:46.840 Security Send/Receive: Not Supported
00:26:46.840 Format NVM: Not Supported
00:26:46.840 Firmware Activate/Download: Not Supported
00:26:46.840 Namespace Management: Not Supported
00:26:46.840 Device Self-Test: Not Supported
00:26:46.840 Directives: Not Supported
00:26:46.840 NVMe-MI: Not Supported
00:26:46.840 Virtualization Management: Not Supported
00:26:46.840 Doorbell Buffer Config: Not Supported
00:26:46.840 Get LBA Status Capability: Not Supported
00:26:46.840 Command & Feature Lockdown Capability: Not Supported
00:26:46.840 Abort Command Limit: 4
00:26:46.840 Async Event Request Limit: 4
00:26:46.840 Number of Firmware Slots: N/A
00:26:46.840 Firmware Slot 1 Read-Only: N/A
00:26:46.840 Firmware Activation Without Reset: N/A
00:26:46.840 Multiple Update Detection Support: N/A
00:26:46.840 Firmware Update Granularity: No Information Provided
00:26:46.840 Per-Namespace SMART Log: No
00:26:46.840 Asymmetric Namespace Access Log Page: Not Supported
00:26:46.840 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:46.840 Command Effects Log Page: Supported
00:26:46.840 Get Log Page Extended Data: Supported
00:26:46.840 Telemetry Log Pages: Not Supported
00:26:46.840 Persistent Event Log Pages: Not Supported
00:26:46.840 Supported Log Pages Log Page: May Support
00:26:46.840 Commands Supported & Effects Log Page: Not Supported
00:26:46.840 Feature Identifiers & Effects Log Page:May Support
00:26:46.840 NVMe-MI Commands & Effects Log Page: May Support
00:26:46.840 Data Area 4 for Telemetry Log: Not Supported
00:26:46.840 Error Log Page Entries Supported: 128
00:26:46.840 Keep Alive: Supported
00:26:46.840 Keep Alive Granularity: 10000 ms
00:26:46.840
00:26:46.840 NVM Command Set Attributes
00:26:46.840 ==========================
00:26:46.840 Submission Queue Entry Size
00:26:46.840 Max: 64
00:26:46.840 Min: 64
00:26:46.840 Completion Queue Entry Size
00:26:46.840 Max: 16
00:26:46.840 Min: 16
00:26:46.840 Number of Namespaces: 32
00:26:46.840 Compare Command: Supported
00:26:46.840 Write Uncorrectable Command: Not Supported
00:26:46.840 Dataset Management Command: Supported
00:26:46.840 Write Zeroes Command: Supported
00:26:46.840 Set Features Save Field: Not Supported
00:26:46.840 Reservations: Supported
00:26:46.840 Timestamp: Not Supported
00:26:46.840 Copy: Supported
00:26:46.840 Volatile Write Cache: Present
00:26:46.840 Atomic Write Unit (Normal): 1
00:26:46.840 Atomic Write Unit (PFail): 1
00:26:46.840 Atomic Compare & Write Unit: 1
00:26:46.840 Fused Compare & Write: Supported
00:26:46.840 Scatter-Gather List
00:26:46.840 SGL Command Set: Supported
00:26:46.840 SGL Keyed: Supported
00:26:46.840 SGL Bit Bucket Descriptor: Not Supported
00:26:46.840 SGL Metadata Pointer: Not Supported
00:26:46.840 Oversized SGL: Not Supported
00:26:46.840 SGL Metadata Address: Not Supported
00:26:46.840 SGL Offset: Supported
00:26:46.840 Transport SGL Data Block: Not Supported
00:26:46.840 Replay Protected Memory Block: Not Supported
00:26:46.840
00:26:46.840 Firmware Slot Information
00:26:46.840 =========================
00:26:46.840 Active slot: 1
00:26:46.840 Slot 1 Firmware Revision: 24.09.1
00:26:46.840
00:26:46.840
00:26:46.840 Commands Supported and Effects
00:26:46.840 ==============================
00:26:46.840 Admin Commands
00:26:46.840 --------------
00:26:46.840 Get Log Page (02h): Supported
00:26:46.840 Identify (06h): Supported
00:26:46.840 Abort (08h): Supported
00:26:46.840 Set Features (09h): Supported
00:26:46.840 Get Features (0Ah): Supported
00:26:46.840 Asynchronous Event Request (0Ch): Supported
00:26:46.840 Keep Alive (18h): Supported
00:26:46.840 I/O Commands
00:26:46.840 ------------
00:26:46.840 Flush (00h): Supported LBA-Change
00:26:46.840 Write (01h): Supported LBA-Change
00:26:46.840 Read (02h): Supported
00:26:46.840 Compare (05h): Supported
00:26:46.840 Write Zeroes (08h): Supported LBA-Change
00:26:46.840 Dataset Management (09h): Supported LBA-Change
00:26:46.840 Copy (19h): Supported LBA-Change
00:26:46.840
00:26:46.840 Error Log
00:26:46.840 =========
00:26:46.840
00:26:46.840 Arbitration
00:26:46.840 ===========
00:26:46.840 Arbitration Burst: 1
00:26:46.840
00:26:46.840 Power Management
00:26:46.840 ================
00:26:46.840 Number of Power States: 1
00:26:46.840 Current Power State: Power State #0
00:26:46.840 Power State #0:
00:26:46.840 Max Power: 0.00 W
00:26:46.840 Non-Operational State: Operational
00:26:46.840 Entry Latency: Not Reported
00:26:46.840 Exit Latency: Not Reported
00:26:46.840 Relative Read Throughput: 0
00:26:46.840 Relative Read Latency: 0
00:26:46.840 Relative Write Throughput: 0
00:26:46.840 Relative Write Latency: 0
00:26:46.840 Idle Power: Not Reported
00:26:46.840 Active Power: Not Reported
00:26:46.840 Non-Operational Permissive Mode: Not Supported
00:26:46.840
00:26:46.840 Health Information
00:26:46.840 ==================
00:26:46.840 Critical Warnings:
00:26:46.840 Available Spare Space: OK
00:26:46.840 Temperature: OK
00:26:46.840 Device Reliability: OK
00:26:46.840 Read Only: No
00:26:46.840 Volatile Memory Backup: OK
00:26:46.840 Current Temperature: 0 Kelvin (-273 Celsius)
00:26:46.840 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:26:46.840 Available Spare: 0%
00:26:46.840 Available Spare Threshold: 0%
00:26:46.840 Life Percent[2024-12-15 16:14:15.216373] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x184400
00:26:46.840 [2024-12-15 16:14:15.216382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:26:46.840 [2024-12-15 16:14:15.216403] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:26:46.840 [2024-12-15 16:14:15.216408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:46.840 [2024-12-15 16:14:15.216415] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x184400
00:26:46.840 [2024-12-15 16:14:15.216441] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:26:46.840 [2024-12-15 16:14:15.216451] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 23583 doesn't match qid
00:26:46.840 [2024-12-15 16:14:15.216464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32543 cdw0:5 sqhd:b9b0 p:0 m:0 dnr:0
00:26:46.840 [2024-12-15 16:14:15.216470] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 23583 doesn't match qid
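The fields in the dump above map directly onto the IDENTIFY controller data and the CAP property that the trace shows being fetched over the fabric. A sketch against the public API, with the values from this run in comments (print_some_fields is an illustrative name; 'ctrlr' is a connected handle like the one in the earlier sketch):

/* Sketch: where a few of the printed fields come from, via spdk/nvme.h. */
#include <stdio.h>
#include "spdk/nvme.h"

static void print_some_fields(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

    printf("Serial Number: %.20s\n", (const char *)cdata->sn);   /* SPDK00000000000001 */
    printf("Max Number of Namespaces: %u\n", cdata->nn);         /* 32 */
    printf("Maximum Queue Entries: %u\n", cap.bits.mqes + 1u);   /* MQES is 0-based -> 128 */
    printf("Doorbell Stride: %u bytes\n", 4u << cap.bits.dstrd); /* 2^(2+DSTRD) -> 4 */
    printf("Reset Timeout: %u ms\n", cap.bits.to * 500u);        /* CAP.TO in 500 ms units -> 15000 */
}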
00:26:46.840 [2024-12-15 16:14:15.216479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32543 cdw0:5 sqhd:b9b0 p:0 m:0 dnr:0 00:26:46.840 [2024-12-15 16:14:15.216485] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 23583 doesn't match qid 00:26:46.840 [2024-12-15 16:14:15.216493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32543 cdw0:5 sqhd:b9b0 p:0 m:0 dnr:0 00:26:46.840 [2024-12-15 16:14:15.216499] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 23583 doesn't match qid 00:26:46.840 [2024-12-15 16:14:15.216507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32543 cdw0:5 sqhd:b9b0 p:0 m:0 dnr:0 00:26:46.840 [2024-12-15 16:14:15.216516] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.840 [2024-12-15 16:14:15.216540] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.840 [2024-12-15 16:14:15.216546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:26:46.840 [2024-12-15 16:14:15.216554] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.840 [2024-12-15 16:14:15.216569] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216583] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.840 [2024-12-15 16:14:15.216589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:46.840 [2024-12-15 16:14:15.216595] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:46.840 [2024-12-15 16:14:15.216603] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:46.840 [2024-12-15 16:14:15.216609] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216618] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:46.840 [2024-12-15 16:14:15.216647] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:46.840 [2024-12-15 16:14:15.216653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:46.840 [2024-12-15 16:14:15.216659] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x184400 00:26:46.840 [2024-12-15 16:14:15.216668] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400 00:26:46.840 [2024-12-15 
16:14:15.216676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:26:46.840 [... repeated admin-queue shutdown-poll records elided through 2024-12-15 16:14:15.218650: each iteration logs nvme_rdma.c:2496:nvme_rdma_process_recv_completion (*DEBUG*: CQ recv completion), nvme_qpair.c: 474:spdk_nvme_print_completion (*NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 p:0 m:0 dnr:0, with sqhd advancing from 001c through 0005 as the submission queue head wraps), nvme_rdma.c:2389:nvme_rdma_request_ready and nvme_rdma.c:2293:nvme_rdma_qpair_submit_request (*DEBUG*: local addr/length/lkey), and nvme_qpair.c: 218:nvme_admin_qpair_print_command (*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0) ...]
00:26:46.842 [2024-12-15 16:14:15.218657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:26:46.842 [2024-12-15 16:14:15.218679] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:26:46.842 [2024-12-15 16:14:15.222690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
00:26:46.842 [2024-12-15 16:14:15.222700] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x184400
00:26:46.842 [2024-12-15 16:14:15.222709] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184400
00:26:46.842 [2024-12-15 16:14:15.222717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:26:46.842 [2024-12-15 16:14:15.222741] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:26:46.842 [2024-12-15 16:14:15.222747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0007 p:0 m:0 dnr:0
00:26:46.842 [2024-12-15 16:14:15.222753] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x184400
00:26:46.842 [2024-12-15 16:14:15.222760] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:26:46.842 [... controller identify header and the first Health Information fields are missing from the captured log here ...]
00:26:46.842 Percentage Used: 0%
00:26:46.842 Data Units Read: 0
00:26:46.842 Data Units Written: 0
00:26:46.842 Host Read Commands: 0
00:26:46.842 Host Write Commands: 0
00:26:46.842 Controller Busy Time: 0 minutes
00:26:46.842 Power Cycles: 0
00:26:46.842 Power On Hours: 0 hours
00:26:46.842 Unsafe Shutdowns: 0
00:26:46.842 Unrecoverable Media Errors: 0
00:26:46.842 Lifetime Error Log Entries: 0
00:26:46.842 Warning Temperature Time: 0 minutes
00:26:46.842 Critical Temperature Time: 0 minutes
00:26:46.842
00:26:46.842 Number of Queues
00:26:46.842 ================
00:26:46.842 Number of I/O Submission Queues: 127
00:26:46.842 Number of I/O Completion Queues: 127
00:26:46.842
00:26:46.842 Active Namespaces
00:26:46.842 =================
00:26:46.842 Namespace ID:1
00:26:46.842 Error Recovery Timeout: Unlimited
00:26:46.842 Command Set Identifier: NVM (00h)
00:26:46.842 Deallocate: Supported
00:26:46.842 Deallocated/Unwritten Error: Not Supported
00:26:46.842 Deallocated Read Value: Unknown
00:26:46.842 Deallocate in Write Zeroes: Not Supported
00:26:46.842 Deallocated Guard Field: 0xFFFF
00:26:46.842 Flush: Supported
00:26:46.842 Reservation: Supported
00:26:46.842 Namespace Sharing Capabilities: Multiple Controllers
00:26:46.842 Size (in LBAs): 131072 (0GiB)
00:26:46.842 Capacity (in LBAs): 131072 (0GiB)
00:26:46.842 Utilization (in LBAs): 131072 (0GiB)
00:26:46.842 NGUID: ABCDEF0123456789ABCDEF0123456789
00:26:46.842 EUI64: ABCDEF0123456789
00:26:46.842 UUID: 285dac30-286b-449e-a1eb-5570a9d0be6b
00:26:46.842 Thin Provisioning: Not Supported
00:26:46.842 Per-NS Atomic Units: Yes
00:26:46.842 Atomic Boundary Size (Normal): 0
00:26:46.842 Atomic Boundary Size (PFail): 0
00:26:46.842 Atomic Boundary Offset: 0
00:26:46.842 Maximum Single Source Range Length: 65535
00:26:46.842 Maximum Copy Length: 65535
00:26:46.842 Maximum Source Range Count: 1
00:26:46.842 NGUID/EUI64 Never Reused: No
00:26:46.842 Namespace Write Protected: No
00:26:46.842 Number of LBA Formats: 1
00:26:46.842 Current LBA Format: LBA Format #00
00:26:46.842 LBA Format #00: Data Size: 512 Metadata Size: 0
00:26:46.842
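The attributes above come from SPDK's identify example querying the RDMA listener that this test configured. A rough sketch of reproducing the same query by hand against this target, assuming the example binary is built at build/examples/identify in this workspace (transport string values are taken from this run; the /dev/nvme0 device name depends on enumeration):

    # SPDK userspace identify against the NVMe-oF/RDMA target
    ./build/examples/identify -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

    # Kernel-initiator alternative with nvme-cli: connect, then read identify pages
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0        # controller identify
    nvme id-ns /dev/nvme0 -n 1     # namespace 1 identify (matches "Namespace ID:1" above)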
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 2932662 ']'
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 2932662
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2932662 ']'
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2932662
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2932662
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2932662'
killing process with pid 2932662
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2932662
00:26:46.842 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2932662
00:26:47.102 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:26:47.102 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:26:47.360
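The teardown traced above reduces to three steps: remove the subsystem over the RPC socket, unload the initiator-side kernel modules, and stop the target process. A condensed sketch of the same sequence (PID and NQN taken from this run; assumed to be run from the SPDK repo root):

    # Delete the subsystem from the running target via the RPC script
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload host-side fabrics modules; -v echoes the underlying rmmod calls
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics

    # Stop the nvmf target process started for the test
    kill 2932662 && wait 2932662   # wait only works from the shell that spawned it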
00:26:47.360 real 0m7.929s
00:26:47.360 user 0m6.049s
00:26:47.360 sys 0m5.436s
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:47.360 ************************************
00:26:47.360 END TEST nvmf_identify
00:26:47.360 ************************************
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.360 ************************************
00:26:47.360 START TEST nvmf_perf
00:26:47.360 ************************************
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:26:47.360 * Looking for test storage...
00:26:47.360 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version
00:26:47.360 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
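The lt/cmp_versions trace above is the repo's bash semantic-version check: both version strings are split on '.', '-' and ':' into arrays and compared field by field as integers. A minimal standalone sketch of that logic (simplified; the real helper in scripts/common.sh also validates each field through decimal and supports the other comparison operators):

    #!/usr/bin/env bash
    # Succeed (return 0) when $1 < $2, comparing dot/dash/colon-separated
    # numeric fields; missing fields are treated as 0.
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov 1.15 is older than 2.x'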
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:26:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:47.621 --rc genhtml_branch_coverage=1
00:26:47.621 --rc genhtml_function_coverage=1
00:26:47.621 --rc genhtml_legend=1
00:26:47.621 --rc geninfo_all_blocks=1
00:26:47.621 --rc geninfo_unexecuted_blocks=1
00:26:47.621
00:26:47.621 '
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:26:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:47.621 --rc genhtml_branch_coverage=1
00:26:47.621 --rc genhtml_function_coverage=1
00:26:47.621 --rc genhtml_legend=1
00:26:47.621 --rc geninfo_all_blocks=1
00:26:47.621 --rc geninfo_unexecuted_blocks=1
00:26:47.621
00:26:47.621 '
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:26:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:47.621 --rc genhtml_branch_coverage=1
00:26:47.621 --rc genhtml_function_coverage=1
00:26:47.621 --rc genhtml_legend=1
00:26:47.621 --rc geninfo_all_blocks=1
00:26:47.621 --rc geninfo_unexecuted_blocks=1
00:26:47.621
00:26:47.621 '
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:26:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:47.621 --rc genhtml_branch_coverage=1
00:26:47.621 --rc genhtml_function_coverage=1
00:26:47.621 --rc genhtml_legend=1
00:26:47.621 --rc geninfo_all_blocks=1
00:26:47.621 --rc geninfo_unexecuted_blocks=1
00:26:47.621
00:26:47.621 '
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:47.621 16:14:15
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:47.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.621 16:14:15 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:47.621 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:47.622 16:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:54.197 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:54.197 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:54.197 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:54.197 16:14:22 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:54.197 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # rdma_device_init 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@526 -- # allocate_nic_ips 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:54.197 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:54.197 16:14:22 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:54.198 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:54.198 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:54.198 altname enp217s0f0np0 00:26:54.198 altname ens818f0np0 00:26:54.198 inet 192.168.100.8/24 scope global mlx_0_0 00:26:54.198 valid_lft forever preferred_lft forever 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:54.198 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:54.198 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:54.198 altname enp217s0f1np1 00:26:54.198 altname ens818f1np1 00:26:54.198 inet 192.168.100.9/24 scope global mlx_0_1 00:26:54.198 valid_lft forever preferred_lft forever 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 
00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:26:54.198 192.168.100.9' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@481 -- # echo '192.168.100.8 00:26:54.198 192.168.100.9' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # head -n 1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:26:54.198 192.168.100.9' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # tail -n +2 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # head -n 1 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:26:54.198 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=2936134 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 2936134 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2936134 ']' 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.458 16:14:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:54.458 [2024-12-15 16:14:22.853324] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:54.458 [2024-12-15 16:14:22.853382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.458 [2024-12-15 16:14:22.924152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.458 [2024-12-15 16:14:22.965616] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:54.458 [2024-12-15 16:14:22.965654] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.458 [2024-12-15 16:14:22.965664] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.458 [2024-12-15 16:14:22.965672] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.458 [2024-12-15 16:14:22.965679] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.458 [2024-12-15 16:14:22.965732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.458 [2024-12-15 16:14:22.965830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.458 [2024-12-15 16:14:22.965913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.458 [2024-12-15 16:14:22.965915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.717 16:14:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.717 16:14:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:26:54.717 16:14:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:54.717 16:14:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:54.717 16:14:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:54.717 16:14:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.717 16:14:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:54.717 16:14:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:58.009 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:58.009 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:58.009 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:26:58.009 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:58.268 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:58.268 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:26:58.268 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:58.268 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:26:58.268 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:26:58.268 [2024-12-15 16:14:26.774839] rdma.c:2737:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:26:58.268 [2024-12-15 16:14:26.797351] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf976a0/0xe6dc30) succeed. 00:26:58.268 [2024-12-15 16:14:26.808034] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf98b50/0xeed8c0) succeed. 
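Note: at this point the target is fully configured over SPDK's JSON-RPC socket: gen_nvme.sh's output attaches the local NVMe controller at 0000:d8:00.0 as bdev Nvme0n1, a 64 MiB Malloc bdev becomes Malloc0, and the RDMA transport is registered (the create_ib_device notices above confirm both mlx5 ports, and the requested in-capsule size of 0 is raised to the 256-byte minimum per the rdma.c warning). A condensed sketch of that sequence, assuming perf.sh pipes gen_nvme.sh straight into load_subsystem_config as the back-to-back trace entries suggest:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    GEN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
    $GEN | $RPC load_subsystem_config            # local NVMe -> bdev Nvme0n1
    $RPC bdev_malloc_create 64 512               # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0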
00:26:58.528 16:14:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:58.787 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:58.787 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:58.787 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:58.787 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:59.046 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:59.305 [2024-12-15 16:14:27.675680] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:59.305 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:59.565 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:26:59.565 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:26:59.565 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:59.565 16:14:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:27:00.944 Initializing NVMe Controllers 00:27:00.944 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:27:00.944 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:27:00.944 Initialization complete. Launching workers. 00:27:00.944 ======================================================== 00:27:00.944 Latency(us) 00:27:00.944 Device Information : IOPS MiB/s Average min max 00:27:00.944 PCIE (0000:d8:00.0) NSID 1 from core 0: 102200.75 399.22 312.56 34.07 6203.93 00:27:00.944 ======================================================== 00:27:00.944 Total : 102200.75 399.22 312.56 34.07 6203.93 00:27:00.944 00:27:00.944 16:14:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:04.236 Initializing NVMe Controllers 00:27:04.236 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.236 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:04.236 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:04.236 Initialization complete. Launching workers. 
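Note: the export steps above follow the standard SPDK NVMe-oF pattern: create a subsystem, add each bdev as a namespace, then open an RDMA data listener plus the discovery listener on the first target IP. A minimal sketch using the same names as the trace (RPC as in the sketch further up); the result table of the just-launched q=1 fabrics run follows below:

    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001   # -a: allow any host
    for bdev in Malloc0 Nvme0n1; do
        $RPC nvmf_subsystem_add_ns $NQN "$bdev"                # becomes NSID 1 and 2
    done
    $RPC nvmf_subsystem_add_listener $NQN      -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420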
00:27:04.236 ======================================================== 00:27:04.236 Latency(us) 00:27:04.236 Device Information : IOPS MiB/s Average min max 00:27:04.236 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6715.17 26.23 148.57 49.13 5037.07 00:27:04.236 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5212.51 20.36 191.48 65.94 5074.24 00:27:04.236 ======================================================== 00:27:04.236 Total : 11927.68 46.59 167.32 49.13 5074.24 00:27:04.236 00:27:04.236 16:14:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:07.529 Initializing NVMe Controllers 00:27:07.529 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.529 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:07.529 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:07.529 Initialization complete. Launching workers. 00:27:07.529 ======================================================== 00:27:07.529 Latency(us) 00:27:07.529 Device Information : IOPS MiB/s Average min max 00:27:07.529 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18497.00 72.25 1723.20 488.64 7035.12 00:27:07.529 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4017.00 15.69 7977.10 5231.62 14822.19 00:27:07.529 ======================================================== 00:27:07.529 Total : 22514.00 87.95 2839.03 488.64 14822.19 00:27:07.529 00:27:07.529 16:14:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:27:07.529 16:14:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:12.806 Initializing NVMe Controllers 00:27:12.806 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.807 Controller IO queue size 128, less than required. 00:27:12.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:12.807 Controller IO queue size 128, less than required. 00:27:12.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:12.807 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:12.807 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:12.807 Initialization complete. Launching workers. 
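Note: every run in this suite is the same spdk_nvme_perf binary pointed either at the local PCIe controller or at the RDMA listener; only queue depth, IO size, and runtime vary. A sketch of the invocation just launched (its table follows below), minus its -O 16384 io-unit option and the -HI flags of the run above, which this sketch does not cover:

    PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # -q queue depth   -o IO size in bytes   -w workload pattern
    # -M read percentage of the mix   -t run time in seconds   -r target transport ID
    $PERF -q 128 -o 262144 -w randrw -M 50 -t 2 \
          -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'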
00:27:12.807 ======================================================== 00:27:12.807 Latency(us) 00:27:12.807 Device Information : IOPS MiB/s Average min max 00:27:12.807 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3989.50 997.37 32207.15 14037.55 88491.81 00:27:12.807 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4030.50 1007.62 31503.15 12905.32 78495.73 00:27:12.807 ======================================================== 00:27:12.807 Total : 8020.00 2005.00 31853.35 12905.32 88491.81 00:27:12.807 00:27:12.807 16:14:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:27:12.807 No valid NVMe controllers or AIO or URING devices found 00:27:12.807 Initializing NVMe Controllers 00:27:12.807 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.807 Controller IO queue size 128, less than required. 00:27:12.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:12.807 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:12.807 Controller IO queue size 128, less than required. 00:27:12.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:12.807 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:12.807 WARNING: Some requested NVMe devices were skipped 00:27:12.807 16:14:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:27:17.002 Initializing NVMe Controllers 00:27:17.003 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:17.003 Controller IO queue size 128, less than required. 00:27:17.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:17.003 Controller IO queue size 128, less than required. 00:27:17.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:17.003 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:17.003 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:17.003 Initialization complete. Launching workers. 
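Note: the -o 36964 run above had nothing left to test because a perf IO size must be a whole multiple of each namespace's sector size, and 36964 is not a multiple of 512 (36964 = 72 * 512 + 100), so both namespaces were removed and no controllers remained. A one-line check; the statistics of the just-launched --transport-stat run follow below:

    echo $(( 36964 % 512 ))   # -> 100; a valid size gives 0 (e.g. 36864 = 72 * 512)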
00:27:17.003
00:27:17.003 ====================
00:27:17.003 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:27:17.003 RDMA transport:
00:27:17.003 dev name: mlx5_0
00:27:17.003 polls: 407201
00:27:17.003 idle_polls: 403494
00:27:17.003 completions: 45146
00:27:17.003 queued_requests: 1
00:27:17.003 total_send_wrs: 22573
00:27:17.003 send_doorbell_updates: 3465
00:27:17.003 total_recv_wrs: 22700
00:27:17.003 recv_doorbell_updates: 3468
00:27:17.003 ---------------------------------
00:27:17.003
00:27:17.003 ====================
00:27:17.003 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:27:17.003 RDMA transport:
00:27:17.003 dev name: mlx5_0
00:27:17.003 polls: 410402
00:27:17.003 idle_polls: 410137
00:27:17.003 completions: 20202
00:27:17.003 queued_requests: 1
00:27:17.003 total_send_wrs: 10101
00:27:17.003 send_doorbell_updates: 247
00:27:17.003 total_recv_wrs: 10228
00:27:17.003 recv_doorbell_updates: 249
00:27:17.003 ---------------------------------
00:27:17.003 ========================================================
00:27:17.003 Latency(us)
00:27:17.003 Device Information : IOPS MiB/s Average min max
00:27:17.003 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5641.91 1410.48 22634.54 11256.69 73119.42
00:27:17.003 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2524.51 631.13 50426.95 30013.77 75721.93
00:27:17.003 ========================================================
00:27:17.003 Total : 8166.43 2041.61 31226.10 11256.69 75721.93
00:27:17.003
00:27:17.003 16:14:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:27:17.003 16:14:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:17.003 16:14:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:27:17.003 16:14:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']'
00:27:17.003 16:14:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:27:23.574 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=8265bc94-a0bb-4417-8e8a-701c47c78fde
00:27:23.574 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 8265bc94-a0bb-4417-8e8a-701c47c78fde
00:27:23.574 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=8265bc94-a0bb-4417-8e8a-701c47c78fde
00:27:23.574 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info
00:27:23.574 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc
00:27:23.574 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs
00:27:23.574 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:27:23.574 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[
00:27:23.574 {
00:27:23.574 "uuid": "8265bc94-a0bb-4417-8e8a-701c47c78fde",
00:27:23.574 "name": "lvs_0",
00:27:23.574 "base_bdev": "Nvme0n1",
00:27:23.574 "total_data_clusters": 476466,
00:27:23.574 "free_clusters": 476466,
00:27:23.574 "block_size": 512,
00:27:23.574 "cluster_size": 4194304
00:27:23.574
} 00:27:23.574 ]' 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8265bc94-a0bb-4417-8e8a-701c47c78fde") .free_clusters' 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=476466 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8265bc94-a0bb-4417-8e8a-701c47c78fde") .cluster_size' 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1905864 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1905864 00:27:23.575 1905864 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:23.575 16:14:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8265bc94-a0bb-4417-8e8a-701c47c78fde lbd_0 20480 00:27:23.841 16:14:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=ad72a5ca-ba3f-42b0-96a3-4ac1ad58756a 00:27:23.841 16:14:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ad72a5ca-ba3f-42b0-96a3-4ac1ad58756a lvs_n_0 00:27:25.748 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=252500e0-4625-4617-8d35-5506d95fe571 00:27:25.748 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 252500e0-4625-4617-8d35-5506d95fe571 00:27:25.748 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=252500e0-4625-4617-8d35-5506d95fe571 00:27:25.748 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:25.748 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:25.749 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:25.749 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:26.008 { 00:27:26.008 "uuid": "8265bc94-a0bb-4417-8e8a-701c47c78fde", 00:27:26.008 "name": "lvs_0", 00:27:26.008 "base_bdev": "Nvme0n1", 00:27:26.008 "total_data_clusters": 476466, 00:27:26.008 "free_clusters": 471346, 00:27:26.008 "block_size": 512, 00:27:26.008 "cluster_size": 4194304 00:27:26.008 }, 00:27:26.008 { 00:27:26.008 "uuid": "252500e0-4625-4617-8d35-5506d95fe571", 00:27:26.008 "name": "lvs_n_0", 00:27:26.008 "base_bdev": "ad72a5ca-ba3f-42b0-96a3-4ac1ad58756a", 00:27:26.008 "total_data_clusters": 5114, 00:27:26.008 "free_clusters": 5114, 00:27:26.008 "block_size": 512, 00:27:26.008 "cluster_size": 4194304 00:27:26.008 } 00:27:26.008 ]' 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="252500e0-4625-4617-8d35-5506d95fe571") .free_clusters' 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="252500e0-4625-4617-8d35-5506d95fe571") .cluster_size' 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:26.008 20456 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:26.008 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 252500e0-4625-4617-8d35-5506d95fe571 lbd_nest_0 20456 00:27:26.267 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a135eccd-5fa6-4622-aed3-4384dc67335b 00:27:26.267 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.526 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:26.526 16:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a135eccd-5fa6-4622-aed3-4384dc67335b 00:27:26.526 16:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:26.785 16:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:26.785 16:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:26.785 16:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:26.785 16:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:26.785 16:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:38.998 Initializing NVMe Controllers 00:27:38.998 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.998 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:38.998 Initialization complete. Launching workers. 
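Note: get_lvs_free_mb converts cluster counts to MiB: with 4 MiB clusters, lvs_n_0's 5114 free clusters give 5114 * 4 = 20456 MiB, which is under the 20480 MiB cap and so is used as-is for lbd_nest_0 (the top-level store's 476466 clusters gave 1905864 MiB earlier and was capped to 20480). The loop just entered sweeps every queue-depth/IO-size pair against that nested lvol, six 10-second runs whose tables follow below. A sketch of the sweep (PERF as defined above):

    # qd_depth=(1 32 128) x io_size=(512 131072) -> 6 spdk_nvme_perf runs
    for qd in 1 32 128; do
        for o in 512 131072; do
            $PERF -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
        done
    done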
00:27:38.998 ======================================================== 00:27:38.998 Latency(us) 00:27:38.998 Device Information : IOPS MiB/s Average min max 00:27:38.998 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5820.10 2.84 171.03 68.61 8071.83 00:27:38.998 ======================================================== 00:27:38.998 Total : 5820.10 2.84 171.03 68.61 8071.83 00:27:38.998 00:27:38.998 16:15:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:38.998 16:15:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:51.296 Initializing NVMe Controllers 00:27:51.296 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:51.296 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:51.296 Initialization complete. Launching workers. 00:27:51.296 ======================================================== 00:27:51.296 Latency(us) 00:27:51.296 Device Information : IOPS MiB/s Average min max 00:27:51.296 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2667.30 333.41 374.67 154.98 5102.29 00:27:51.296 ======================================================== 00:27:51.296 Total : 2667.30 333.41 374.67 154.98 5102.29 00:27:51.296 00:27:51.296 16:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:51.296 16:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:51.296 16:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:01.280 Initializing NVMe Controllers 00:28:01.280 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:01.280 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:01.280 Initialization complete. Launching workers. 00:28:01.280 ======================================================== 00:28:01.280 Latency(us) 00:28:01.280 Device Information : IOPS MiB/s Average min max 00:28:01.280 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11388.00 5.56 2808.66 893.55 9510.75 00:28:01.280 ======================================================== 00:28:01.280 Total : 11388.00 5.56 2808.66 893.55 9510.75 00:28:01.280 00:28:01.280 16:15:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:01.280 16:15:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:13.492 Initializing NVMe Controllers 00:28:13.492 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.492 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:13.492 Initialization complete. Launching workers. 
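Note: the tables above are internally consistent with Little's law: at queue depth 1, IOPS times average latency should be about one outstanding IO, and at q=32 about thirty-two. Checking against the 512 B runs above:

    awk 'BEGIN { printf "%.3f\n", 5820.10 * 171.03e-6 }'    # -> 0.995, ~ queue depth 1
    awk 'BEGIN { printf "%.2f\n", 11388.00 * 2808.66e-6 }'  # -> 31.98, ~ queue depth 32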
00:28:13.492 ======================================================== 00:28:13.492 Latency(us) 00:28:13.492 Device Information : IOPS MiB/s Average min max 00:28:13.492 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3980.50 497.56 8044.39 4926.98 16027.85 00:28:13.492 ======================================================== 00:28:13.492 Total : 3980.50 497.56 8044.39 4926.98 16027.85 00:28:13.492 00:28:13.492 16:15:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:13.492 16:15:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:13.492 16:15:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:25.708 Initializing NVMe Controllers 00:28:25.708 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.708 Controller IO queue size 128, less than required. 00:28:25.708 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.708 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:25.708 Initialization complete. Launching workers. 00:28:25.708 ======================================================== 00:28:25.708 Latency(us) 00:28:25.708 Device Information : IOPS MiB/s Average min max 00:28:25.708 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18827.38 9.19 6800.66 1822.23 16911.86 00:28:25.708 ======================================================== 00:28:25.708 Total : 18827.38 9.19 6800.66 1822.23 16911.86 00:28:25.708 00:28:25.708 16:15:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:25.708 16:15:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:35.701 Initializing NVMe Controllers 00:28:35.701 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.701 Controller IO queue size 128, less than required. 00:28:35.701 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.701 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:35.701 Initialization complete. Launching workers. 
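Note: for the 128 KiB runs the MiB/s column is simply IOPS scaled by the transfer size: 3980.50 IOPS * 131072 B is 3980.50 / 8 = 497.56 MiB/s, exactly the q=32 figure above. The "Controller IO queue size 128, less than required" notice on the q=128 runs appears to mean the requested depth meets or exceeds the controller's advertised IO queue size, so surplus requests queue inside the driver rather than on the wire. The bandwidth check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 3980.50 * 131072 / 1048576 }'   # -> 497.56 MiB/s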
00:28:35.701 ======================================================== 00:28:35.701 Latency(us) 00:28:35.701 Device Information : IOPS MiB/s Average min max 00:28:35.701 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11087.54 1385.94 11538.52 3375.11 23670.59 00:28:35.701 ======================================================== 00:28:35.701 Total : 11087.54 1385.94 11538.52 3375.11 23670.59 00:28:35.701 00:28:35.701 16:16:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.701 16:16:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a135eccd-5fa6-4622-aed3-4384dc67335b 00:28:35.960 16:16:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:35.960 16:16:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ad72a5ca-ba3f-42b0-96a3-4ac1ad58756a 00:28:36.528 16:16:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:36.528 16:16:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:36.528 rmmod nvme_rdma 00:28:36.528 rmmod nvme_fabrics 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 2936134 ']' 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 2936134 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2936134 ']' 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2936134 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:36.528 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2936134 00:28:36.787 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:36.787 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:36.787 16:16:05 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2936134' 00:28:36.787 killing process with pid 2936134 00:28:36.787 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2936134 00:28:36.787 16:16:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2936134 00:28:39.325 16:16:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:39.325 16:16:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:28:39.325 00:28:39.325 real 1m51.776s 00:28:39.325 user 7m1.899s 00:28:39.325 sys 0m7.316s 00:28:39.325 16:16:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.325 16:16:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:39.325 ************************************ 00:28:39.325 END TEST nvmf_perf 00:28:39.325 ************************************ 00:28:39.325 16:16:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:28:39.325 16:16:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.326 ************************************ 00:28:39.326 START TEST nvmf_fio_host 00:28:39.326 ************************************ 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:28:39.326 * Looking for test storage... 
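Note: each suite is driven through run_test from autotest_common.sh, which prints the START/END banners seen above, times the script (the real/user/sys summary), and propagates its exit status into the overall job. Conceptually, and much simplified relative to the real helper:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"            # e.g. host/fio.sh --transport=rdma
        echo "END TEST $name"
    }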
00:28:39.326 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:39.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.326 --rc genhtml_branch_coverage=1 00:28:39.326 --rc genhtml_function_coverage=1 00:28:39.326 --rc genhtml_legend=1 00:28:39.326 --rc geninfo_all_blocks=1 00:28:39.326 --rc geninfo_unexecuted_blocks=1 00:28:39.326 00:28:39.326 ' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:39.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.326 --rc genhtml_branch_coverage=1 00:28:39.326 --rc genhtml_function_coverage=1 00:28:39.326 --rc genhtml_legend=1 00:28:39.326 --rc geninfo_all_blocks=1 00:28:39.326 --rc geninfo_unexecuted_blocks=1 00:28:39.326 00:28:39.326 ' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:39.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.326 --rc genhtml_branch_coverage=1 00:28:39.326 --rc genhtml_function_coverage=1 00:28:39.326 --rc genhtml_legend=1 00:28:39.326 --rc geninfo_all_blocks=1 00:28:39.326 --rc geninfo_unexecuted_blocks=1 00:28:39.326 00:28:39.326 ' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:39.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.326 --rc genhtml_branch_coverage=1 00:28:39.326 --rc genhtml_function_coverage=1 00:28:39.326 --rc genhtml_legend=1 00:28:39.326 --rc geninfo_all_blocks=1 00:28:39.326 --rc geninfo_unexecuted_blocks=1 00:28:39.326 00:28:39.326 ' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.326 16:16:07 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.326 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:39.327 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:28:39.327 
16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.327 16:16:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:45.902 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:45.902 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:45.902 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:28:45.902 16:16:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:45.903 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:45.903 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # rdma_device_init 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@526 -- # allocate_nic_ips 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:45.903 16:16:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:45.903 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:46.163 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:46.163 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:46.163 altname enp217s0f0np0 00:28:46.163 altname ens818f0np0 00:28:46.163 inet 192.168.100.8/24 scope global mlx_0_0 00:28:46.163 valid_lft forever preferred_lft forever 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # 
interface=mlx_0_1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:46.163 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:46.163 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:46.163 altname enp217s0f1np1 00:28:46.163 altname ens818f1np1 00:28:46.163 inet 192.168.100.9/24 scope global mlx_0_1 00:28:46.163 valid_lft forever preferred_lft forever 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # 
for nic_name in $(get_rdma_if_list) 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:28:46.163 192.168.100.9' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:28:46.163 192.168.100.9' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # head -n 1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:28:46.163 192.168.100.9' 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # tail -n +2 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # head -n 1 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:46.163 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2957373 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2957373 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2957373 ']' 00:28:46.164 16:16:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.164 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:46.164 [2024-12-15 16:16:14.660650] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:46.164 [2024-12-15 16:16:14.660704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.164 [2024-12-15 16:16:14.729611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.423 [2024-12-15 16:16:14.769602] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.423 [2024-12-15 16:16:14.769641] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.423 [2024-12-15 16:16:14.769651] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.423 [2024-12-15 16:16:14.769662] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.423 [2024-12-15 16:16:14.769669] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.423 [2024-12-15 16:16:14.769720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.423 [2024-12-15 16:16:14.769816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.423 [2024-12-15 16:16:14.769899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.423 [2024-12-15 16:16:14.769901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.423 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:46.423 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:28:46.423 16:16:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:46.696 [2024-12-15 16:16:15.056176] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5ace40/0x5b1330) succeed. 00:28:46.696 [2024-12-15 16:16:15.066983] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5ae480/0x5f29d0) succeed. 
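The trace above is the whole of nvmftestinit on this rig: gather_supported_nvmf_pci_devs matches the two Mellanox ports (0x15b3:0x1015) at 0000:d9:00.0 and 0000:d9:00.1, load_ib_rdma_modules probes the kernel IB/RDMA stack, allocate_nic_ips resolves 192.168.100.8 and 192.168.100.9 on mlx_0_0 and mlx_0_1, and host/fio.sh then launches the target and creates the RDMA transport. Condensed to its essentials, the bring-up is roughly the following sketch, reconstructed from this run (the binary path, core mask 0xF, and shared-buffer count are simply the values this CI host uses):

  # load the kernel RDMA stack, as load_ib_rdma_modules does above
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done

  # resolve an interface address exactly as get_ip_address does
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8

  # start the target and create the RDMA transport (host/fio.sh@23 and @29)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192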
00:28:46.696 16:16:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:46.696 16:16:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.696 16:16:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.696 16:16:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:46.957 Malloc1 00:28:46.957 16:16:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.216 16:16:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:47.475 16:16:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:47.735 [2024-12-15 16:16:16.046675] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:47.735 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:48.015 16:16:16 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:48.015 16:16:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:48.284 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:48.284 fio-3.35 00:28:48.284 Starting 1 thread 00:28:50.812 00:28:50.812 test: (groupid=0, jobs=1): err= 0: pid=2957806: Sun Dec 15 16:16:18 2024 00:28:50.812 read: IOPS=18.2k, BW=71.0MiB/s (74.4MB/s)(142MiB/2004msec) 00:28:50.812 slat (nsec): min=1344, max=35339, avg=1476.46, stdev=400.57 00:28:50.812 clat (usec): min=1704, max=6440, avg=3496.96, stdev=80.20 00:28:50.812 lat (usec): min=1718, max=6442, avg=3498.43, stdev=80.09 00:28:50.812 clat percentiles (usec): 00:28:50.812 | 1.00th=[ 3458], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3490], 00:28:50.812 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3490], 00:28:50.812 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3523], 95.00th=[ 3523], 00:28:50.812 | 99.00th=[ 3523], 99.50th=[ 3654], 99.90th=[ 5014], 99.95th=[ 5538], 00:28:50.812 | 99.99th=[ 6063] 00:28:50.812 bw ( KiB/s): min=71256, max=73616, per=100.00%, avg=72698.00, stdev=1013.98, samples=4 00:28:50.812 iops : min=17814, max=18404, avg=18174.50, stdev=253.49, samples=4 00:28:50.812 write: IOPS=18.2k, BW=71.0MiB/s (74.5MB/s)(142MiB/2004msec); 0 zone resets 00:28:50.812 slat (nsec): min=1383, max=17830, avg=1559.65, stdev=405.12 00:28:50.812 clat (usec): min=2452, max=6446, avg=3495.07, stdev=73.89 00:28:50.812 lat (usec): min=2462, max=6448, avg=3496.63, stdev=73.80 00:28:50.812 clat percentiles (usec): 00:28:50.812 | 1.00th=[ 3458], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3490], 00:28:50.812 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3490], 00:28:50.812 | 70.00th=[ 3490], 80.00th=[ 3490], 90.00th=[ 3523], 95.00th=[ 3523], 00:28:50.812 | 99.00th=[ 3523], 99.50th=[ 3654], 99.90th=[ 4228], 99.95th=[ 5538], 00:28:50.812 | 99.99th=[ 6063] 00:28:50.812 bw ( KiB/s): min=71320, max=73360, per=100.00%, avg=72772.00, stdev=976.75, samples=4 00:28:50.812 iops : min=17830, max=18340, avg=18193.00, stdev=244.19, samples=4 00:28:50.812 lat (msec) : 2=0.01%, 4=99.87%, 10=0.13% 00:28:50.812 cpu : usr=99.50%, sys=0.05%, ctx=15, majf=0, minf=2 00:28:50.812 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:50.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:50.812 issued rwts: total=36413,36449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:50.812 00:28:50.812 Run status group 0 (all jobs): 00:28:50.812 READ: bw=71.0MiB/s (74.4MB/s), 71.0MiB/s-71.0MiB/s (74.4MB/s-74.4MB/s), io=142MiB (149MB), run=2004-2004msec 00:28:50.812 WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=142MiB (149MB), run=2004-2004msec 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.812 16:16:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:50.812 16:16:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:28:50.812 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:50.812 fio-3.35 00:28:50.812 Starting 1 thread 00:28:53.340 00:28:53.340 test: (groupid=0, jobs=1): err= 0: pid=2958460: Sun Dec 15 16:16:21 2024 00:28:53.340 read: IOPS=14.6k, BW=229MiB/s (240MB/s)(450MiB/1967msec) 00:28:53.340 slat (nsec): min=2251, max=42119, avg=2552.38, stdev=873.65 00:28:53.340 clat (usec): min=495, max=8040, avg=1530.47, stdev=1203.20 00:28:53.340 lat (usec): min=498, max=8056, avg=1533.02, stdev=1203.54 00:28:53.340 clat percentiles (usec): 00:28:53.340 | 1.00th=[ 668], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 898], 00:28:53.340 | 30.00th=[ 971], 40.00th=[ 1057], 50.00th=[ 1156], 60.00th=[ 1254], 00:28:53.340 | 70.00th=[ 1385], 80.00th=[ 1549], 90.00th=[ 2966], 95.00th=[ 4817], 00:28:53.340 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7308], 00:28:53.340 | 99.99th=[ 8029] 00:28:53.340 bw ( KiB/s): min=107360, max=119072, per=48.71%, avg=114072.00, stdev=5581.37, samples=4 00:28:53.340 iops : min= 6710, max= 7442, avg=7129.50, stdev=348.84, samples=4 00:28:53.340 write: IOPS=8225, BW=129MiB/s (135MB/s)(231MiB/1799msec); 0 zone resets 00:28:53.340 slat (usec): min=26, max=144, avg=28.27, stdev= 4.71 00:28:53.340 clat (usec): min=4359, max=18764, avg=12615.25, stdev=1808.71 00:28:53.340 lat (usec): min=4385, max=18793, avg=12643.52, stdev=1808.42 00:28:53.340 clat percentiles (usec): 00:28:53.340 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:28:53.340 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:28:53.340 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14877], 95.00th=[15664], 00:28:53.341 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:28:53.341 | 99.99th=[18744] 00:28:53.341 bw ( KiB/s): min=112768, max=123264, per=89.95%, avg=118376.00, stdev=5212.85, samples=4 00:28:53.341 iops : min= 7048, max= 7704, avg=7398.50, stdev=325.80, samples=4 00:28:53.341 lat (usec) : 500=0.01%, 750=2.76%, 1000=19.87% 00:28:53.341 lat (msec) : 2=36.10%, 4=1.95%, 10=7.31%, 20=32.01% 00:28:53.341 cpu : usr=96.21%, sys=2.00%, ctx=205, majf=0, minf=2 00:28:53.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:53.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:53.341 issued rwts: total=28793,14797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:53.341 00:28:53.341 Run status group 0 (all jobs): 00:28:53.341 READ: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=450MiB (472MB), run=1967-1967msec 00:28:53.341 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=231MiB (242MB), run=1799-1799msec 00:28:53.341 16:16:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 
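With the target listening, host/fio.sh@32 through @41 above provision a 64 MiB malloc bdev with a 512 B block size, export it as a namespace of nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420, and drive it over the fabric with fio through the spdk_nvme plugin, first with the 4 KiB example_config.fio job (about 18.2k read and 18.2k write IOPS here) and then with mock_sgl_config.fio at 16 KiB. Stripped of the xtrace plumbing, the sequence reduces to roughly this sketch of the same RPC and fio calls (the fio install path and plugin path are the ones this workspace uses):

  # export a 64 MiB, 512 B-block malloc bdev over NVMe-oF/RDMA
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # run fio against the target via the preloaded spdk_nvme ioengine
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096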
00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:28:53.599 16:16:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:28:56.980 Nvme0n1 00:28:56.980 16:16:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:02.243 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=b5f168a9-cf04-4fc3-b0e1-ad1a0d8d025f 00:29:02.243 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb b5f168a9-cf04-4fc3-b0e1-ad1a0d8d025f 00:29:02.243 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=b5f168a9-cf04-4fc3-b0e1-ad1a0d8d025f 00:29:02.243 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:02.243 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:02.243 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:02.243 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:02.501 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:02.501 { 00:29:02.501 "uuid": "b5f168a9-cf04-4fc3-b0e1-ad1a0d8d025f", 00:29:02.501 "name": "lvs_0", 00:29:02.501 "base_bdev": "Nvme0n1", 00:29:02.501 "total_data_clusters": 1862, 00:29:02.501 "free_clusters": 1862, 00:29:02.501 "block_size": 512, 00:29:02.501 "cluster_size": 1073741824 00:29:02.501 } 00:29:02.501 ]' 00:29:02.501 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b5f168a9-cf04-4fc3-b0e1-ad1a0d8d025f") .free_clusters' 00:29:02.501 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1862 00:29:02.501 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b5f168a9-cf04-4fc3-b0e1-ad1a0d8d025f") .cluster_size' 00:29:02.501 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:02.501 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1906688 00:29:02.501 16:16:30 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1906688 00:29:02.501 1906688 00:29:02.501 16:16:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:29:03.066 3b3a5a1a-3dc6-475a-9c23-9cc427bc4803 00:29:03.067 16:16:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:03.325 16:16:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:03.583 16:16:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:03.583 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:03.858 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:03.858 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:03.858 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:03.858 16:16:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:04.119 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:04.119 fio-3.35 00:29:04.119 Starting 1 thread 00:29:06.642 00:29:06.642 test: (groupid=0, jobs=1): err= 0: pid=2960761: Sun Dec 15 16:16:34 2024 00:29:06.642 read: IOPS=9792, BW=38.2MiB/s (40.1MB/s)(76.7MiB/2005msec) 00:29:06.642 slat (nsec): min=1424, max=23195, avg=1565.68, stdev=274.21 00:29:06.642 clat (usec): min=199, max=333308, avg=6477.59, stdev=18794.09 00:29:06.642 lat (usec): min=200, max=333311, avg=6479.15, stdev=18794.11 00:29:06.642 clat percentiles (msec): 00:29:06.642 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:06.642 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:29:06.642 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:29:06.642 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:29:06.642 | 99.99th=[ 334] 00:29:06.642 bw ( KiB/s): min=14488, max=47552, per=99.96%, avg=39154.00, stdev=16445.35, samples=4 00:29:06.642 iops : min= 3622, max=11888, avg=9788.50, stdev=4111.34, samples=4 00:29:06.642 write: IOPS=9807, BW=38.3MiB/s (40.2MB/s)(76.8MiB/2005msec); 0 zone resets 00:29:06.642 slat (nsec): min=1464, max=17113, avg=1631.56, stdev=243.47 00:29:06.642 clat (usec): min=152, max=333680, avg=6452.55, stdev=18264.69 00:29:06.642 lat (usec): min=154, max=333684, avg=6454.18, stdev=18264.73 00:29:06.642 clat percentiles (msec): 00:29:06.642 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:06.642 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:29:06.642 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:29:06.642 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:29:06.642 | 99.99th=[ 334] 00:29:06.642 bw ( KiB/s): min=15208, max=47224, per=99.89%, avg=39188.00, stdev=15986.70, samples=4 00:29:06.642 iops : min= 3802, max=11806, avg=9797.00, stdev=3996.68, samples=4 00:29:06.642 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:29:06.642 lat (msec) : 2=0.04%, 4=0.21%, 10=99.38%, 500=0.33% 00:29:06.642 cpu : usr=99.40%, sys=0.25%, ctx=32, majf=0, minf=2 00:29:06.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:06.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:06.642 issued rwts: total=19633,19665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:06.642 00:29:06.642 Run status group 0 (all jobs): 00:29:06.642 READ: bw=38.2MiB/s (40.1MB/s), 38.2MiB/s-38.2MiB/s (40.1MB/s-40.1MB/s), io=76.7MiB (80.4MB), run=2005-2005msec 00:29:06.642 WRITE: bw=38.3MiB/s (40.2MB/s), 38.3MiB/s-38.3MiB/s (40.2MB/s-40.2MB/s), io=76.8MiB (80.5MB), 
run=2005-2005msec 00:29:06.642 16:16:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:06.642 16:16:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=b97d4fbd-30e2-4b11-b7cc-8d566a423d1b 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb b97d4fbd-30e2-4b11-b7cc-8d566a423d1b 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=b97d4fbd-30e2-4b11-b7cc-8d566a423d1b 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:08.010 { 00:29:08.010 "uuid": "b5f168a9-cf04-4fc3-b0e1-ad1a0d8d025f", 00:29:08.010 "name": "lvs_0", 00:29:08.010 "base_bdev": "Nvme0n1", 00:29:08.010 "total_data_clusters": 1862, 00:29:08.010 "free_clusters": 0, 00:29:08.010 "block_size": 512, 00:29:08.010 "cluster_size": 1073741824 00:29:08.010 }, 00:29:08.010 { 00:29:08.010 "uuid": "b97d4fbd-30e2-4b11-b7cc-8d566a423d1b", 00:29:08.010 "name": "lvs_n_0", 00:29:08.010 "base_bdev": "3b3a5a1a-3dc6-475a-9c23-9cc427bc4803", 00:29:08.010 "total_data_clusters": 476206, 00:29:08.010 "free_clusters": 476206, 00:29:08.010 "block_size": 512, 00:29:08.010 "cluster_size": 4194304 00:29:08.010 } 00:29:08.010 ]' 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b97d4fbd-30e2-4b11-b7cc-8d566a423d1b") .free_clusters' 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=476206 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b97d4fbd-30e2-4b11-b7cc-8d566a423d1b") .cluster_size' 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1904824 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1904824 00:29:08.010 1904824 00:29:08.010 16:16:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:29:08.940 c4b54d9c-6ed3-4ef7-b361-71a0b927f341 00:29:08.940 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:09.197 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 
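host/fio.sh@49 onward repeats the exercise on real flash: the NVMe device at 0000:d8:00.0 is attached as Nvme0n1, lvstore lvs_0 is created on it with 1 GiB clusters, and get_lvs_free_mb turns the lvstore JSON into a volume size (1862 free clusters × 1 GiB = 1906688 MiB for lbd_0); the nested lvs_n_0 built on lbd_0 uses 4 MiB clusters, giving 476206 × 4194304 B = 1904824 MiB for lbd_nest_0. A condensed sketch of that computation and of the cnode3 export follows, using the same jq filters as autotest_common.sh@1369 and @1370 above (selecting by store name rather than by uuid, purely for readability):

  # free MiB in an lvstore = free_clusters * cluster_size / (1024 * 1024)
  lvs=$(./scripts/rpc.py bdev_lvol_get_lvstores)
  fc=$(jq '.[] | select(.name=="lvs_n_0") .free_clusters' <<< "$lvs")   # 476206
  cs=$(jq '.[] | select(.name=="lvs_n_0") .cluster_size'  <<< "$lvs")   # 4194304
  echo $(( fc * cs / 1024 / 1024 ))                                     # -> 1904824

  # size the nested volume to the whole store and export it as cnode3
  ./scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0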
00:29:09.197 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:09.454 16:16:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:09.726 16:16:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:09.726 16:16:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:09.726 16:16:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:09.726 16:16:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:09.986 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:09.986 fio-3.35 00:29:09.986 Starting 1 thread 00:29:12.507 00:29:12.507 test: (groupid=0, jobs=1): err= 0: pid=2961958: Sun Dec 15 16:16:40 2024 00:29:12.507 read: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(80.5MiB/2006msec) 00:29:12.507 slat (nsec): min=1363, max=22792, avg=1480.28, stdev=240.25 00:29:12.507 clat (usec): min=2818, max=10695, avg=6163.91, stdev=172.60 00:29:12.507 lat (usec): min=2821, max=10697, avg=6165.39, stdev=172.56 00:29:12.507 clat percentiles (usec): 00:29:12.507 | 1.00th=[ 6063], 5.00th=[ 6128], 10.00th=[ 6128], 20.00th=[ 6128], 00:29:12.507 | 30.00th=[ 6128], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6194], 00:29:12.507 | 70.00th=[ 6194], 80.00th=[ 6194], 90.00th=[ 6194], 95.00th=[ 6194], 00:29:12.507 | 99.00th=[ 6390], 99.50th=[ 6456], 99.90th=[ 9110], 99.95th=[ 9765], 00:29:12.507 | 99.99th=[10683] 00:29:12.507 bw ( KiB/s): min=39744, max=41920, per=100.00%, avg=41080.00, stdev=936.65, samples=4 00:29:12.507 iops : min= 9936, max=10480, avg=10270.00, stdev=234.16, samples=4 00:29:12.507 write: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(80.5MiB/2006msec); 0 zone resets 00:29:12.507 slat (nsec): min=1396, max=17438, avg=1571.95, stdev=233.85 00:29:12.507 clat (usec): min=2821, max=11260, avg=6183.51, stdev=184.93 00:29:12.507 lat (usec): min=2825, max=11262, avg=6185.08, stdev=184.92 00:29:12.507 clat percentiles (usec): 00:29:12.507 | 1.00th=[ 6128], 5.00th=[ 6128], 10.00th=[ 6128], 20.00th=[ 6128], 00:29:12.507 | 30.00th=[ 6194], 40.00th=[ 6194], 50.00th=[ 6194], 60.00th=[ 6194], 00:29:12.507 | 70.00th=[ 6194], 80.00th=[ 6194], 90.00th=[ 6194], 95.00th=[ 6259], 00:29:12.507 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 9110], 99.95th=[10683], 00:29:12.507 | 99.99th=[11207] 00:29:12.507 bw ( KiB/s): min=40224, max=41584, per=99.96%, avg=41088.00, stdev=598.81, samples=4 00:29:12.507 iops : min=10056, max=10396, avg=10272.00, stdev=149.70, samples=4 00:29:12.507 lat (msec) : 4=0.03%, 10=99.93%, 20=0.04% 00:29:12.507 cpu : usr=99.50%, sys=0.10%, ctx=15, majf=0, minf=2 00:29:12.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:12.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.507 issued rwts: total=20598,20613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.507 00:29:12.507 Run status group 0 (all jobs): 00:29:12.507 READ: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=80.5MiB (84.4MB), run=2006-2006msec 00:29:12.507 WRITE: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=80.5MiB (84.4MB), run=2006-2006msec 00:29:12.507 16:16:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:12.507 16:16:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:12.507 16:16:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:20.599 16:16:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:20.599 16:16:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:25.852 16:16:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:25.852 16:16:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:29.124 rmmod nvme_rdma 00:29:29.124 rmmod nvme_fabrics 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 2957373 ']' 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 2957373 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2957373 ']' 00:29:29.124 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2957373 00:29:29.125 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:29.125 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:29.125 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2957373 00:29:29.125 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:29.125 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:29.125 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2957373' 00:29:29.125 killing process with pid 2957373 00:29:29.125 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2957373 00:29:29.125 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2957373 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:29:29.382 00:29:29.382 real 0m50.124s 00:29:29.382 user 3m39.856s 00:29:29.382 sys 0m7.775s 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.382 ************************************ 00:29:29.382 END TEST nvmf_fio_host 00:29:29.382 ************************************ 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.382 ************************************ 00:29:29.382 START TEST nvmf_failover 00:29:29.382 ************************************ 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:29:29.382 * Looking for test storage... 00:29:29.382 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:29:29.382 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:29.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.640 --rc genhtml_branch_coverage=1 00:29:29.640 --rc genhtml_function_coverage=1 00:29:29.640 --rc genhtml_legend=1 00:29:29.640 --rc geninfo_all_blocks=1 00:29:29.640 --rc geninfo_unexecuted_blocks=1 00:29:29.640 00:29:29.640 ' 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:29.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.640 --rc genhtml_branch_coverage=1 00:29:29.640 --rc genhtml_function_coverage=1 00:29:29.640 --rc genhtml_legend=1 00:29:29.640 --rc geninfo_all_blocks=1 00:29:29.640 --rc geninfo_unexecuted_blocks=1 00:29:29.640 00:29:29.640 ' 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:29.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.640 --rc genhtml_branch_coverage=1 00:29:29.640 --rc genhtml_function_coverage=1 00:29:29.640 --rc genhtml_legend=1 00:29:29.640 --rc geninfo_all_blocks=1 00:29:29.640 --rc geninfo_unexecuted_blocks=1 00:29:29.640 00:29:29.640 ' 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:29.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.640 --rc genhtml_branch_coverage=1 00:29:29.640 --rc genhtml_function_coverage=1 00:29:29.640 --rc genhtml_legend=1 00:29:29.640 --rc geninfo_all_blocks=1 00:29:29.640 --rc geninfo_unexecuted_blocks=1 00:29:29.640 00:29:29.640 ' 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.640 16:16:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:29.640 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.640 16:16:58 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.641 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.641 16:16:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:36.195 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:29:36.195 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:36.196 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme 
connect -i 15' 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:36.196 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:36.196 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # rdma_device_init 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 
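For context before the trace continues: allocate_nic_ips (invoked next) walks the RDMA-capable interfaces reported by get_rdma_if_list and gives each one an address from 192.168.100.0/24, counting up from NVMF_IP_LEAST_ADDR=8. A minimal sketch of that loop, assuming only the helper names visible in the trace; the real body in test/nvmf/common.sh also skips interfaces that already hold an address, as the get_ip_address / [[ -z $ip ]] checks below suggest:

    count=$NVMF_IP_LEAST_ADDR                 # 8, set in nvmf/common.sh@13
    for nic_name in $(get_rdma_if_list); do   # mlx_0_0 and mlx_0_1 on this node
        ip addr add "192.168.100.$((count++))/24" dev "$nic_name"
    done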
00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@526 -- # allocate_nic_ips 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:36.196 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:36.196 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:36.196 altname enp217s0f0np0 00:29:36.196 altname ens818f0np0 00:29:36.196 inet 192.168.100.8/24 scope global mlx_0_0 00:29:36.196 valid_lft forever preferred_lft forever 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- 
# for nic_name in $(get_rdma_if_list) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:36.196 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:36.196 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:36.196 altname enp217s0f1np1 00:29:36.196 altname ens818f1np1 00:29:36.196 inet 192.168.100.9/24 scope global mlx_0_1 00:29:36.196 valid_lft forever preferred_lft forever 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:36.196 16:17:04 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:36.196 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:29:36.197 192.168.100.9' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:29:36.197 192.168.100.9' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # head -n 1 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:29:36.197 192.168.100.9' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # tail -n +2 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # head -n 1 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=2968307 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 2968307 00:29:36.197 16:17:04 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2968307 ']' 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:36.197 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:36.197 [2024-12-15 16:17:04.738941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:36.197 [2024-12-15 16:17:04.738990] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.454 [2024-12-15 16:17:04.808721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:36.454 [2024-12-15 16:17:04.847194] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.454 [2024-12-15 16:17:04.847234] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.454 [2024-12-15 16:17:04.847243] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.454 [2024-12-15 16:17:04.847251] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.454 [2024-12-15 16:17:04.847274] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.454 [2024-12-15 16:17:04.847323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.454 [2024-12-15 16:17:04.847406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.454 [2024-12-15 16:17:04.847408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.454 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:36.454 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:36.454 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:36.454 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:36.454 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:36.454 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.454 16:17:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:36.710 [2024-12-15 16:17:05.177062] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x226e5c0/0x2272ab0) succeed. 
00:29:36.711 [2024-12-15 16:17:05.187592] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x226fb60/0x22b4150) succeed. 00:29:36.967 16:17:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:36.967 Malloc0 00:29:36.967 16:17:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.223 16:17:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.480 16:17:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:37.737 [2024-12-15 16:17:06.087914] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:37.737 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:37.737 [2024-12-15 16:17:06.268233] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:37.737 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:29:37.994 [2024-12-15 16:17:06.452906] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2968600 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2968600 /var/tmp/bdevperf.sock 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2968600 ']' 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:37.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
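Taken together, the target-side setup the trace above just performed reduces to the following RPC sequence. Every command, NQN, address, and port is copied from the log; only the $rpc shorthand (failover.sh@14 assigns the same path to rpc_py) and the port loop are editorial:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do            # three listeners for the failover hops
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s $port
    done

bdevperf is then started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f, so the host-side controllers are attached over that dedicated RPC socket rather than the default /var/tmp/spdk.sock.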
00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.994 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:38.251 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.251 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:38.251 16:17:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.507 NVMe0n1 00:29:38.507 16:17:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.764 00:29:38.764 16:17:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:38.764 16:17:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2968782 00:29:38.764 16:17:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:40.133 16:17:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:40.133 16:17:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:43.409 16:17:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:43.409 00:29:43.409 16:17:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:43.409 16:17:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:46.749 16:17:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:46.749 [2024-12-15 16:17:15.101012] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:46.749 16:17:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:47.681 16:17:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:29:47.939 16:17:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2968782 00:29:54.503 { 00:29:54.503 "results": [ 00:29:54.503 { 00:29:54.503 "job": "NVMe0n1", 00:29:54.503 "core_mask": "0x1", 00:29:54.503 "workload": "verify", 00:29:54.503 "status": "finished", 00:29:54.503 "verify_range": { 00:29:54.503 "start": 0, 00:29:54.503 "length": 16384 00:29:54.503 }, 00:29:54.503 "queue_depth": 128, 00:29:54.503 "io_size": 4096, 00:29:54.503 "runtime": 15.005722, 
00:29:54.503 "iops": 14594.499351647324, 00:29:54.503 "mibps": 57.00976309237236, 00:29:54.503 "io_failed": 4621, 00:29:54.503 "io_timeout": 0, 00:29:54.503 "avg_latency_us": 8569.436697684485, 00:29:54.503 "min_latency_us": 340.7872, 00:29:54.503 "max_latency_us": 1046898.2784 00:29:54.503 } 00:29:54.503 ], 00:29:54.503 "core_count": 1 00:29:54.503 } 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2968600 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2968600 ']' 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2968600 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2968600 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2968600' 00:29:54.503 killing process with pid 2968600 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2968600 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2968600 00:29:54.503 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:54.503 [2024-12-15 16:17:06.512612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:54.503 [2024-12-15 16:17:06.512671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2968600 ] 00:29:54.503 [2024-12-15 16:17:06.583977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.503 [2024-12-15 16:17:06.623114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.503 Running I/O for 15 seconds... 
00:29:54.503 18432.00 IOPS, 72.00 MiB/s [2024-12-15T15:17:23.073Z] 9856.50 IOPS, 38.50 MiB/s [2024-12-15T15:17:23.073Z] [2024-12-15 16:17:09.442132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x180f00 00:29:54.503 [2024-12-15 16:17:09.442170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.503 [2024-12-15 16:17:09.442187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x180f00 00:29:54.503 [2024-12-15 16:17:09.442197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.503 [2024-12-15 16:17:09.442208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x180f00 00:29:54.503 [2024-12-15 16:17:09.442218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.503 [2024-12-15 16:17:09.442228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x180f00 00:29:54.503 [2024-12-15 16:17:09.442237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.503 [2024-12-15 16:17:09.442247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x180f00 00:29:54.503 [2024-12-15 16:17:09.442256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.503 [2024-12-15 16:17:09.442267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command (READ) / spdk_nvme_print_completion (ABORTED - SQ DELETION (00/08)) pair repeats for lba:26704 through lba:26920, with only the cid, lba, and buffer address changing; the repetitive try.txt entries are elided ...]
ADDRESS 0x20000754a000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x180f00 00:29:54.504 [2024-12-15 16:17:09.442981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.504 [2024-12-15 16:17:09.442991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 
16:17:09.443059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443232] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 
sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.505 [2024-12-15 16:17:09.443711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x180f00 00:29:54.505 [2024-12-15 16:17:09.443720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:27288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27360 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000075b8000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.443982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.443990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 
16:17:09.444127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444297] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x180f00 00:29:54.506 [2024-12-15 16:17:09.444432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.506 [2024-12-15 16:17:09.444442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.453932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.453946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.453955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 
sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.453965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.453974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.453985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.453993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.454004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.454013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.454023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.454031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.454044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.454053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.454063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.454072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.454082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.454091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.454101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x180f00 00:29:54.507 [2024-12-15 16:17:09.454110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.507 [2024-12-15 16:17:09.455836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:54.507 [2024-12-15 16:17:09.455849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:54.507 [2024-12-15 16:17:09.455858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27648 len:8 PRP1 0x0 PRP2 0x0 00:29:54.507 
00:29:54.507 [2024-12-15 16:17:09.455867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:54.507 [2024-12-15 16:17:09.455909] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4900 was disconnected and freed. reset controller.
00:29:54.507 [2024-12-15 16:17:09.455920] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:29:54.507 [2024-12-15 16:17:09.455930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.507 [2024-12-15 16:17:09.455966 .. 16:17:09.456031] nvme_qpair.c: 223:nvme_admin_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: [condensed: 4 repeated ASYNC EVENT REQUEST (0c) commands, qid:0 cid:1..4 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:129bd50 sqhd:aca0 p:0 m:0 dnr:0]
00:29:54.507 [2024-12-15 16:17:09.473042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:29:54.507 [2024-12-15 16:17:09.473060] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:29:54.507 [2024-12-15 16:17:09.473069] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:54.507 [2024-12-15 16:17:09.475807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.507 [2024-12-15 16:17:09.519159] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:54.507 11763.00 IOPS, 45.95 MiB/s [2024-12-15T15:17:23.077Z] 13428.50 IOPS, 52.46 MiB/s [2024-12-15T15:17:23.077Z] 12643.40 IOPS, 49.39 MiB/s [2024-12-15T15:17:23.077Z]
00:29:54.507 [2024-12-15 16:17:12.904497 ..] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: [condensed: repeated WRITE/READ command/completion pairs; WRITE sqid:1 nsid:1 lba:124824..125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, interleaved with READ sqid:1 nsid:1 lba:124224..124512 len:8 SGL KEYED DATA BLOCK ADDRESS len:0x1000 key:0x181600, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0; excerpt truncated mid-stream]
DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.905895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.905903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.905913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.905922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.905932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.905941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.905952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.905963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.905973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.905981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.905992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.906001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.906025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.906050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.906070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 
dnr:0 00:29:54.509 [2024-12-15 16:17:12.906080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.906088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x181600 00:29:54.509 [2024-12-15 16:17:12.906107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.509 [2024-12-15 16:17:12.906408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.509 [2024-12-15 16:17:12.906417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124616 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000075c2000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.510 [2024-12-15 16:17:12.906796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124704 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.906983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.906992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.907002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.907011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.907021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.907029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.907039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.907048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.907059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 
key:0x181600 00:29:54.510 [2024-12-15 16:17:12.907067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.907077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.907086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.907096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.907105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.907115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.907124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.907134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x181600 00:29:54.510 [2024-12-15 16:17:12.907142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.510 [2024-12-15 16:17:12.908852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:54.510 [2024-12-15 16:17:12.908873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:54.510 [2024-12-15 16:17:12.908886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124816 len:8 PRP1 0x0 PRP2 0x0 00:29:54.510 [2024-12-15 16:17:12.908897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.511 [2024-12-15 16:17:12.908941] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4840 was disconnected and freed. reset controller. 00:29:54.511 [2024-12-15 16:17:12.908952] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:29:54.511 [2024-12-15 16:17:12.908962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.511 [2024-12-15 16:17:12.911761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.511 [2024-12-15 16:17:12.926040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:54.511 [2024-12-15 16:17:12.972785] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
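The "Start failover" notice above is emitted by bdev_nvme when an alternate transport ID has been registered for the same controller. For reference, a minimal sketch of how such a failover pair is typically set up with SPDK's rpc.py; this is not taken from this job's scripts: the bdev name "Nvme0" is hypothetical, the address, NQN, and ports mirror the messages above, and flag spellings may differ across SPDK versions.

  # Sketch: register a primary RDMA path for cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Register an alternate path under the same controller name. When the active
  # qpair drops, bdev_nvme aborts queued I/O (the SQ DELETION completions
  # above), fails over to this trid, resets the controller, and retries the
  # aborted I/O; that is why the IOPS samples below recover.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover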
00:29:54.511 11783.00 IOPS, 46.03 MiB/s [2024-12-15T15:17:23.081Z] 12764.71 IOPS, 49.86 MiB/s [2024-12-15T15:17:23.081Z] 13503.38 IOPS, 52.75 MiB/s [2024-12-15T15:17:23.081Z] 13931.67 IOPS, 54.42 MiB/s [2024-12-15T15:17:23.081Z]
[2024-12-15 16:17:17.322153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:54.511 [2024-12-15 16:17:17.322192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0
[... near-identical command/completion pairs elided: a later qpair teardown aborts the next batch of queued I/O the same way, with every WRITE (lba 111288-111616, SGL DATA BLOCK) and READ (lba 110776-111184, SGL KEYED DATA BLOCK, key:0x180f00) on qid:1 completing with ABORTED - SQ DELETION (00/08) ...]
00:29:54.513 [2024-12-15 16:17:17.324091]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x180f00 00:29:54.513 [2024-12-15 16:17:17.324100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x180f00 00:29:54.513 [2024-12-15 16:17:17.324120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x180f00 00:29:54.513 [2024-12-15 16:17:17.324142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x180f00 00:29:54.513 [2024-12-15 16:17:17.324161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.513 [2024-12-15 16:17:17.324180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.513 [2024-12-15 16:17:17.324200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.513 [2024-12-15 16:17:17.324219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.513 [2024-12-15 16:17:17.324238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.513 [2024-12-15 16:17:17.324261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.513 [2024-12-15 16:17:17.324280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x180f00 00:29:54.513 [2024-12-15 16:17:17.324300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x180f00 00:29:54.513 [2024-12-15 16:17:17.324320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x180f00 00:29:54.513 [2024-12-15 16:17:17.324340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.513 [2024-12-15 16:17:17.324350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x180f00 00:29:54.513 [2024-12-15 16:17:17.324359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x180f00 00:29:54.514 [2024-12-15 16:17:17.324379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x180f00 00:29:54.514 [2024-12-15 16:17:17.324398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x180f00 00:29:54.514 [2024-12-15 16:17:17.324417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x180f00 00:29:54.514 [2024-12-15 16:17:17.324437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.324728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.514 [2024-12-15 16:17:17.324738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6fd29000 sqhd:7250 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.326654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:54.514 [2024-12-15 16:17:17.326668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:54.514 [2024-12-15 16:17:17.326677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111792 len:8 PRP1 0x0 PRP2 0x0 00:29:54.514 [2024-12-15 16:17:17.326691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.514 [2024-12-15 16:17:17.326736] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4840 was disconnected and freed. reset controller. 00:29:54.514 [2024-12-15 16:17:17.326747] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:29:54.514 [2024-12-15 16:17:17.326757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.514 [2024-12-15 16:17:17.329496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.514 [2024-12-15 16:17:17.343694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:54.514 [2024-12-15 16:17:17.390300] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
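Each forced path switch ends in exactly one 'Resetting controller successful.' line like the one above, and failover.sh asserts on that count just below. A minimal sketch of that assertion, assuming the bdevperf output was captured to the try.txt file this run uses:

  # Three path detaches are forced during the run, so expect three resets.
  count=$(grep -c 'Resetting controller successful' \
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
  fi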
00:29:54.514 12544.90 IOPS, 49.00 MiB/s [2024-12-15T15:17:23.084Z] 13096.82 IOPS, 51.16 MiB/s [2024-12-15T15:17:23.084Z] 13565.92 IOPS, 52.99 MiB/s [2024-12-15T15:17:23.084Z] 13959.85 IOPS, 54.53 MiB/s [2024-12-15T15:17:23.084Z] 14300.07 IOPS, 55.86 MiB/s [2024-12-15T15:17:23.084Z] 14594.20 IOPS, 57.01 MiB/s
00:29:54.514 Latency(us)
00:29:54.514 [2024-12-15T15:17:23.084Z] Device Information : runtime(s)     IOPS   MiB/s  Fail/s  TO/s  Average     min        max
00:29:54.514 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:54.514 Verification LBA range: start 0x0 length 0x4000
00:29:54.514 NVMe0n1            :      15.01  14594.50  57.01  307.95  0.00  8569.44  340.79  1046898.28
00:29:54.514 [2024-12-15T15:17:23.084Z] ===================================================================================================================
00:29:54.514 [2024-12-15T15:17:23.084Z] Total              :             14594.50  57.01  307.95  0.00  8569.44  340.79  1046898.28
00:29:54.514 Received shutdown signal, test time was about 15.000000 seconds
00:29:54.514
00:29:54.514 Latency(us)
00:29:54.514 [2024-12-15T15:17:23.084Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:29:54.514 [2024-12-15T15:17:23.084Z] ===================================================================================================================
00:29:54.514 [2024-12-15T15:17:23.084Z] Total              :  0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2971269
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2971269 /var/tmp/bdevperf.sock
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2971269 ']'
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
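The relaunch above runs bdevperf in wait mode: -z holds the app idle until a perform_tests RPC arrives over the private socket named by -r, so the job can be set up and triggered remotely. A sketch of that launch-and-wait pattern, with SPDK_ROOT standing in for the /var/jenkins/workspace/nvmf-phy-autotest/spdk prefix and rpc_get_methods used only as a liveness probe:

  SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumed local checkout
  SOCK=/var/tmp/bdevperf.sock

  # 128-deep, 4 KiB verify workload for 1 s; nothing runs until
  # perform_tests is issued over $SOCK (-f kept exactly as the harness uses it).
  "$SPDK_ROOT/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # Block until the RPC server inside bdevperf answers.
  "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" -t 60 rpc_get_methods >/dev/null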
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:29:54.514 16:17:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:29:54.772 [2024-12-15 16:17:23.097167] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:29:54.772 16:17:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:29:54.772 [2024-12-15 16:17:23.305883] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:29:54.772 16:17:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:55.338 NVMe0n1
00:29:55.338 16:17:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:55.338
00:29:55.338 16:17:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:55.596
00:29:55.596 16:17:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:55.596 16:17:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:29:55.854 16:17:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:56.111 16:17:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:29:59.398 16:17:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:59.398 16:17:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:29:59.398 16:17:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2972082
00:29:59.398 16:17:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:59.398 16:17:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2972082
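The trace above is the core of the failover exercise: publish two extra listeners on the target, attach the same controller name once per path so bdev_nvme records the alternates as failover trids, then detach the active path and give the reset time to settle. Condensed into a sketch using the same addresses and NQN as this run:

  RPC=$SPDK_ROOT/scripts/rpc.py          # SPDK_ROOT as in the sketch above
  NQN=nqn.2016-06.io.spdk:cnode1
  ADDR=192.168.100.8

  # Target side: listen on the two alternate RDMA ports.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t rdma -a "$ADDR" -s 4421
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t rdma -a "$ADDR" -s 4422

  # Host side: the first attach creates NVMe0n1, later attaches add trids.
  for port in 4420 4421 4422; do
      "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t rdma -a "$ADDR" -s "$port" -f ipv4 -n "$NQN"
  done

  # Force a failover by removing the active path, then confirm the
  # controller survived on another path.
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t rdma -a "$ADDR" -s 4420 -f ipv4 -n "$NQN"
  sleep 3
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0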
"workload": "verify", 00:30:00.330 "status": "finished", 00:30:00.330 "verify_range": { 00:30:00.330 "start": 0, 00:30:00.330 "length": 16384 00:30:00.330 }, 00:30:00.330 "queue_depth": 128, 00:30:00.330 "io_size": 4096, 00:30:00.330 "runtime": 1.007718, 00:30:00.330 "iops": 18417.851025783006, 00:30:00.330 "mibps": 71.94473056946487, 00:30:00.330 "io_failed": 0, 00:30:00.330 "io_timeout": 0, 00:30:00.330 "avg_latency_us": 6913.267994482759, 00:30:00.330 "min_latency_us": 2477.2608, 00:30:00.330 "max_latency_us": 12268.3392 00:30:00.330 } 00:30:00.330 ], 00:30:00.330 "core_count": 1 00:30:00.330 } 00:30:00.330 16:17:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:00.330 [2024-12-15 16:17:22.729301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:00.330 [2024-12-15 16:17:22.729358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971269 ] 00:30:00.330 [2024-12-15 16:17:22.801252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.330 [2024-12-15 16:17:22.837033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.330 [2024-12-15 16:17:24.483213] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:30:00.330 [2024-12-15 16:17:24.483849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.330 [2024-12-15 16:17:24.483880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.330 [2024-12-15 16:17:24.503672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:00.330 [2024-12-15 16:17:24.520067] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:00.330 Running I/O for 1 seconds... 
00:30:00.330 18407.00 IOPS, 71.90 MiB/s
00:30:00.330 Latency(us)
00:30:00.330 [2024-12-15T15:17:28.900Z] Device Information : runtime(s)     IOPS   MiB/s  Fail/s  TO/s  Average     min       max
00:30:00.330 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:00.330 Verification LBA range: start 0x0 length 0x4000
00:30:00.330 NVMe0n1            :       1.01  18417.85  71.94    0.00  0.00  6913.27  2477.26  12268.34
00:30:00.330 [2024-12-15T15:17:28.900Z] ===================================================================================================================
00:30:00.330 [2024-12-15T15:17:28.900Z] Total              :              18417.85  71.94    0.00  0.00  6913.27  2477.26  12268.34
00:30:00.587 16:17:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:00.587 16:17:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:30:00.844 16:17:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:00.844 16:17:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:01.101 16:17:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:01.101 16:17:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:01.101 16:17:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2971269
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2971269 ']'
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2971269
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2971269
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2971269'
killing process with pid 2971269
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2971269
00:30:04.380 16:17:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2971269
00:30:04.637 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:30:04.637 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 2968307 ']'
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 2968307
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2968307 ']'
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2968307
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2968307
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2968307'
killing process with pid 2968307
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2968307
00:30:04.896 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2968307
00:30:05.154 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:30:05.154 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:30:05.154
00:30:05.154 real 0m35.879s
00:30:05.154 user 1m58.623s
00:30:05.154 sys 0m7.390s
00:30:05.154 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:05.154 16:17:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:05.154 ************************************
00:30:05.154 END TEST nvmf_failover
00:30:05.154 ************************************
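The teardown just traced follows the usual nvmftestfini shape: remove the subsystem, unload the kernel fabrics modules, then kill and reap the target. A sketch under the same assumptions as above (NVMF_APP_PID is this run's target pid; wait only succeeds because the target was started from the same shell):

  "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Module unload may legitimately fail if nothing is loaded, hence set +e.
  set +e
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  set -e

  NVMF_APP_PID=2968307                   # pid of the nvmf target in this run
  kill "$NVMF_APP_PID"
  wait "$NVMF_APP_PID"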
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:05.413 ************************************
00:30:05.413 START TEST nvmf_host_discovery
00:30:05.413 ************************************
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:30:05.413 * Looking for test storage...
00:30:05.413 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... scripts/common.sh@333-@368 cmp_versions trace: ver1=(1 15) and ver2=(2) are split on IFS=.-:, ver1_l=2, ver2_l=1, op='<'; the leading components are compared via decimal 1 / decimal 2, (( ver1[v] < ver2[v] )) holds, and the comparison returns 0 ...]
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:30:05.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:05.413 --rc genhtml_branch_coverage=1
00:30:05.413 --rc genhtml_function_coverage=1
00:30:05.413 --rc genhtml_legend=1
00:30:05.413 --rc geninfo_all_blocks=1
00:30:05.413 --rc geninfo_unexecuted_blocks=1
00:30:05.413 '
[... the matching @1694 LCOV_OPTS= assignment and the @1695 export 'LCOV=lcov ...' / LCOV='lcov ...' assignments repeat the same --rc flag block verbatim and are omitted ...]
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:05.413 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 trace condensed: each step re-exports PATH prepended with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin (the same triple repeated several times over), ending in 'export PATH' and an echo of the full value; the multi-hundred-character PATH strings are omitted ...]
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0
00:30:05.672
00:30:05.672 real 0m0.226s
00:30:05.672 user 0m0.130s
00:30:05.672 sys 0m0.111s
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:05.672 16:17:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:05.672 ************************************
00:30:05.672 END TEST nvmf_host_discovery
00:30:05.672 ************************************
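The early exit above is a transport guard at the top of discovery.sh; its shape is roughly the following (the transport variable name is an assumption, since the trace only shows the already-expanded '[' rdma == rdma ']' test):

  # Discovery tests cannot run over RDMA in this lab topology: the rdma
  # stack will not configure the same IP for host and target.
  if [ "$TEST_TRANSPORT" == rdma ]; then
      echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
      exit 0
  fi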
00:30:05.672 16:17:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:30:05.672 16:17:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:05.672 16:17:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:05.672 16:17:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:05.672 ************************************
00:30:05.672 START TEST nvmf_host_multipath_status
00:30:05.672 ************************************
00:30:05.672 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:30:05.672 * Looking for test storage...
00:30:05.672 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
[... common/autotest_common.sh@1680-@1695 and scripts/common.sh@333-@368 trace omitted: the same lcov version probe, cmp_versions 1.15 '<' 2 walk, and LCOV_OPTS/LCOV exports already shown verbatim for nvmf_host_discovery above ...]
00:30:05.931 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:30:05.931 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:30:05.931 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
[... nvmf/common.sh@9-@22 environment trace omitted: the identical NVMF_PORT/NVMF_SECOND_PORT/NVMF_THIRD_PORT, NVMF_IP_PREFIX=192.168.100, NVME_HOSTNQN/NVME_HOSTID, NET_TYPE=phy and NVME_SUBNQN assignments already traced for the discovery test ...]
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 PATH exports omitted, same condensation as for the discovery test above ...]
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:30:05.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:30:05.932 16:17:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.492 16:17:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 
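
The loop entered above is gather_supported_nvmf_pci_devs resolving each supported vendor:device pair to its PCI functions and then to the net interfaces bound to them. A minimal standalone sketch of the same idea, assuming only the sysfs layout (the 0x15b3 - 0x1015 pair is the Mellanox ID this run detects below; the variable names here are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Scan sysfs for PCI functions matching a vendor:device pair and list the
    # net interfaces bound to each one -- the same lookup the autotest uses to
    # derive names like mlx_0_0 / mlx_0_1 from 0000:d9:00.0 / 0000:d9:00.1.
    mellanox=0x15b3 devid=0x1015
    for dev in /sys/bus/pci/devices/*; do
        [[ $(<"$dev/vendor") == "$mellanox" && $(<"$dev/device") == "$devid" ]] || continue
        echo "Found ${dev##*/} ($mellanox - $devid)"
        for net in "$dev"/net/*; do
            # guard against an unexpanded glob when no net device is bound
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done
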
00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:12.492 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:12.492 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:12.492 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ rdma == tcp 
]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:12.492 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # rdma_device_init 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:12.492 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:12.493 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:12.493 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:12.493 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:12.493 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:12.493 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@526 -- # allocate_nic_ips 00:30:12.493 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:12.493 16:17:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:12.493 16:17:41 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:12.493 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:12.493 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:12.493 altname enp217s0f0np0 00:30:12.493 altname ens818f0np0 00:30:12.493 inet 192.168.100.8/24 scope global mlx_0_0 00:30:12.493 valid_lft forever preferred_lft forever 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:12.493 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:12.751 16:17:41 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:12.751 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:12.751 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:12.751 altname enp217s0f1np1 00:30:12.751 altname ens818f1np1 00:30:12.751 inet 192.168.100.9/24 scope global mlx_0_1 00:30:12.751 valid_lft forever preferred_lft forever 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:12.751 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:30:12.752 192.168.100.9' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:30:12.752 192.168.100.9' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # head -n 1 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:30:12.752 192.168.100.9' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # tail -n +2 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # head -n 1 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=2976601 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 2976601 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2976601 ']' 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:12.752 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:12.752 [2024-12-15 16:17:41.240944] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:12.752 [2024-12-15 16:17:41.240997] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.752 [2024-12-15 16:17:41.313070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:13.010 [2024-12-15 16:17:41.352457] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.010 [2024-12-15 16:17:41.352498] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.010 [2024-12-15 16:17:41.352507] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.010 [2024-12-15 16:17:41.352516] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.010 [2024-12-15 16:17:41.352523] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:13.010 [2024-12-15 16:17:41.354706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.010 [2024-12-15 16:17:41.354709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.010 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:13.010 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:13.010 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:13.010 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:13.010 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:13.010 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.010 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2976601 00:30:13.010 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:13.268 [2024-12-15 16:17:41.670427] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12f7720/0x12fbc10) succeed. 00:30:13.268 [2024-12-15 16:17:41.679438] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12f8c20/0x133d2b0) succeed. 00:30:13.268 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:13.526 Malloc0 00:30:13.526 16:17:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:13.784 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:13.784 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:14.042 [2024-12-15 16:17:42.490714] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:14.042 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:14.300 [2024-12-15 16:17:42.683027] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2976838 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2976838 /var/tmp/bdevperf.sock 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2976838 ']' 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:14.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:14.300 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:14.558 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:14.558 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:14.558 16:17:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:14.558 16:17:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:15.123 Nvme0n1 00:30:15.123 16:17:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:15.123 Nvme0n1 00:30:15.123 16:17:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:15.123 16:17:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:17.650 16:17:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:17.650 16:17:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:30:17.650 16:17:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:17.650 16:17:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:18.583 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:18.583 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:18.583 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.583 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:18.840 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.840 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:18.840 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.840 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:19.098 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:19.098 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:19.098 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.098 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:19.098 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.098 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:19.098 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.098 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:19.356 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.356 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:19.356 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.356 16:17:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:19.613 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.613 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
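
Each check_status assertion traced above expands to the same two-step probe that port_status (host/multipath_status.sh@64) performs: query bdevperf's RPC socket for its I/O paths, then select one attribute of the path whose trsvcid matches. A condensed sketch of that probe, using the rpc.py path, socket, and jq filter shown in this run's trace:

    # port_status <trsvcid> <attr> <expected>: succeeds iff the io_path on
    # that port reports the expected value for current/connected/accessible.
    port_status() {
        local port=$1 attr=$2 expected=$3 got
        got=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
                -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $got == "$expected" ]]
    }
    # e.g. with both listeners optimized, 4420 is the current path and 4421
    # is not, while both stay connected and accessible:
    port_status 4420 current true && port_status 4421 current false
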
00:30:19.613 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:19.613 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.869 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.869 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:19.869 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:19.869 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:20.125 16:17:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:21.057 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:21.057 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:21.057 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.057 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:21.314 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.314 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:21.314 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.314 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:21.572 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.572 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:21.572 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.572 16:17:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:21.830 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.830 16:17:50 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:21.830 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.830 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:21.830 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.830 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:21.830 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.830 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:22.088 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.088 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:22.088 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.088 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:22.346 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.346 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:22.346 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:22.611 16:17:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:30:22.611 16:17:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:23.985 16:17:52 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.985 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:24.243 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.243 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:24.243 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.243 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:24.500 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.500 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:24.500 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.500 16:17:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:24.758 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.758 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:24.758 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.758 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:24.758 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.758 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:30:24.758 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:25.016 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:25.273 16:17:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:26.304 16:17:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:26.304 16:17:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:26.304 16:17:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.304 16:17:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:26.570 16:17:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.570 16:17:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:26.570 16:17:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.570 16:17:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:26.570 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:26.570 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:26.570 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.570 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:26.828 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.828 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:26.828 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.828 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:27.086 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.086 16:17:55 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:27.086 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:27.086 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.086 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.086 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:27.344 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.344 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:27.344 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:27.344 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:27.344 16:17:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:30:27.602 16:17:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:27.859 16:17:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:28.793 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:28.793 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:28.793 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.793 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:29.051 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:29.051 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:29.051 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.051 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:29.309 16:17:57 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:29.309 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:29.309 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.309 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:29.309 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.309 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:29.309 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.309 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:29.567 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.567 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:29.567 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.567 16:17:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:29.824 16:17:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:29.824 16:17:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:29.824 16:17:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.824 16:17:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:29.824 16:17:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:29.824 16:17:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:29.824 16:17:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:30:30.082 16:17:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:30.340 16:17:58 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:31.274 16:17:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:31.274 16:17:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:31.274 16:17:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.274 16:17:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:31.532 16:17:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:31.532 16:17:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:31.532 16:17:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.532 16:17:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:31.790 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.790 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:31.790 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.790 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:31.790 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.790 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:31.790 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.790 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:32.048 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:32.048 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:32.048 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:32.048 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.306 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:30:32.306 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:32.306 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.306 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:32.564 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:32.564 16:18:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:32.564 16:18:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:32.564 16:18:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:30:32.822 16:18:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:33.080 16:18:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:34.014 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:34.014 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:34.014 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.014 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:34.272 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.272 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:34.272 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.272 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:34.529 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.529 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:34.529 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.529 16:18:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:34.530 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.530 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:34.788 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.788 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.788 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.788 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:34.788 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.788 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:35.045 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.045 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:35.046 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.046 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:35.303 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.303 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:35.304 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:35.562 16:18:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:35.562 16:18:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.935 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:37.194 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.194 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:37.194 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.194 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:37.452 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.452 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:37.452 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.452 16:18:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:37.710 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.710 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:37.710 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.710 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:37.710 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.710 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:37.710 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:37.968 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:30:38.226 16:18:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:39.161 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:39.161 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:39.161 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.161 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:39.419 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.419 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:39.419 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.419 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:39.419 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.419 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:39.419 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:39.419 16:18:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.677 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.677 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:39.677 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:39.677 
16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.935 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.935 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:39.935 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.935 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:40.194 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.194 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:40.194 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:40.194 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.194 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.194 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:40.194 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:40.452 16:18:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:40.710 16:18:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:41.645 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:41.645 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:41.645 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.645 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:41.903 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.903 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:41.903 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:41.903 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.161 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.162 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:42.162 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.162 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:42.162 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.162 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:42.162 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.162 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:42.420 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.420 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:42.420 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.420 16:18:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:42.678 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.678 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:42.678 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.678 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2976838 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2976838 ']' 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2976838 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@955 -- # uname 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2976838 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2976838' 00:30:42.936 killing process with pid 2976838 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2976838 00:30:42.936 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2976838 00:30:42.936 { 00:30:42.936 "results": [ 00:30:42.936 { 00:30:42.936 "job": "Nvme0n1", 00:30:42.936 "core_mask": "0x4", 00:30:42.936 "workload": "verify", 00:30:42.936 "status": "terminated", 00:30:42.936 "verify_range": { 00:30:42.936 "start": 0, 00:30:42.936 "length": 16384 00:30:42.936 }, 00:30:42.936 "queue_depth": 128, 00:30:42.936 "io_size": 4096, 00:30:42.936 "runtime": 27.561707, 00:30:42.936 "iops": 16045.631716497095, 00:30:42.936 "mibps": 62.67824889256678, 00:30:42.936 "io_failed": 0, 00:30:42.936 "io_timeout": 0, 00:30:42.936 "avg_latency_us": 7958.045869184728, 00:30:42.936 "min_latency_us": 60.6208, 00:30:42.936 "max_latency_us": 3019898.88 00:30:42.936 } 00:30:42.936 ], 00:30:42.936 "core_count": 1 00:30:42.936 } 00:30:43.197 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2976838 00:30:43.197 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:43.197 [2024-12-15 16:17:42.746931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:43.197 [2024-12-15 16:17:42.746989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976838 ] 00:30:43.197 [2024-12-15 16:17:42.815264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.197 [2024-12-15 16:17:42.853884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.197 [2024-12-15 16:17:43.612004] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:30:43.197 Running I/O for 90 seconds... 
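The xtrace output above repeatedly drives three small helpers from host/multipath_status.sh: set_ANA_state (@59-@60), port_status (@64), and check_status (@68-@73). As a minimal sketch of the flow those trace lines record — reconstructed only from the commands visible in this log, since the script body itself is not part of the console output, so the exact structure below is an assumption:

    # Paths as they appear in the trace above.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # Set the ANA state of the two listeners (ports 4420 and 4421),
    # mirroring the @59/@60 trace lines.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

    # Compare one attribute (current/connected/accessible) of one I/O path
    # against an expected value, mirroring the @64 trace lines.
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # check_status takes six expected values, mirroring the @68-@73 trace:
    # current, connected, accessible for port 4420 then 4421. Under the
    # suite's errexit behavior (an assumption), any mismatch fails the test.
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

Each set_ANA_state/sleep 1/check_status round above then just asserts that bdevperf's view of the two RDMA paths (optimized, non_optimized, or inaccessible on ports 4420/4421) matches what the target was told to report.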
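The terminated-job block printed above is plain JSON emitted by bdevperf, so the headline numbers can be pulled out with jq. A hypothetical one-liner, assuming the JSON has been saved to a file named results.json (the file name is illustrative; the field names are taken verbatim from the block above):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.avg_latency_us) us avg latency"' results.json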
00:30:43.197 18304.00 IOPS, 71.50 MiB/s [2024-12-15T15:18:11.767Z] 18512.00 IOPS, 72.31 MiB/s [2024-12-15T15:18:11.767Z] 18560.00 IOPS, 72.50 MiB/s [2024-12-15T15:18:11.767Z] 18592.00 IOPS, 72.62 MiB/s [2024-12-15T15:18:11.767Z] 18611.20 IOPS, 72.70 MiB/s [2024-12-15T15:18:11.767Z] 18635.33 IOPS, 72.79 MiB/s [2024-12-15T15:18:11.767Z] 18647.43 IOPS, 72.84 MiB/s [2024-12-15T15:18:11.767Z] 18651.75 IOPS, 72.86 MiB/s [2024-12-15T15:18:11.767Z] 18657.00 IOPS, 72.88 MiB/s [2024-12-15T15:18:11.767Z] 18659.00 IOPS, 72.89 MiB/s [2024-12-15T15:18:11.767Z] 18665.73 IOPS, 72.91 MiB/s [2024-12-15T15:18:11.767Z] 18661.75 IOPS, 72.90 MiB/s [2024-12-15T15:18:11.767Z] [2024-12-15 16:17:56.012390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.197 [2024-12-15 16:17:56.012428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:43.197 [2024-12-15 16:17:56.012477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.197 [2024-12-15 16:17:56.012488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:43.197 [2024-12-15 16:17:56.012501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.197 [2024-12-15 16:17:56.012511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:43.197 [2024-12-15 16:17:56.012526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.197 [2024-12-15 16:17:56.012536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:43.198 
[2024-12-15 16:17:56.012633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.012980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:43.198 [2024-12-15 16:17:56.012989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 
key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 
16:17:56.013241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x182500 00:30:43.198 [2024-12-15 16:17:56.013344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:43.198 [2024-12-15 16:17:56.013356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:43.199 [2024-12-15 16:17:56.013605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182500 00:30:43.199 [2024-12-15 16:17:56.013614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:30:43.199 [2024-12-15 16:17:56.013627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182500
00:30:43.199 [2024-12-15 16:17:56.013637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:30:43.199 [2024-12-15 16:17:56.013648 - 16:17:56.015868] (further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs of this burst condensed: READs over lba 125176-125472 via SGL KEYED DATA BLOCK and WRITEs over lba 125680-125944 via SGL DATA BLOCK OFFSET 0x0, all on qid:1, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0)
00:30:43.201 17585.23 IOPS, 68.69 MiB/s [2024-12-15T15:18:11.771Z] 16329.14 IOPS, 63.79 MiB/s [2024-12-15T15:18:11.771Z] 15240.53 IOPS, 59.53 MiB/s [2024-12-15T15:18:11.771Z] 15171.88 IOPS, 59.27 MiB/s [2024-12-15T15:18:11.771Z] 15375.35 IOPS, 60.06 MiB/s [2024-12-15T15:18:11.771Z] 15482.33 IOPS, 60.48 MiB/s [2024-12-15T15:18:11.771Z] 15475.53 IOPS, 60.45 MiB/s [2024-12-15T15:18:11.771Z] 15472.25 IOPS, 60.44 MiB/s [2024-12-15T15:18:11.771Z] 15617.48 IOPS, 61.01 MiB/s [2024-12-15T15:18:11.771Z] 15764.45 IOPS, 61.58 MiB/s [2024-12-15T15:18:11.771Z] 15866.70 IOPS, 61.98 MiB/s [2024-12-15T15:18:11.771Z] 15849.08 IOPS, 61.91 MiB/s [2024-12-15T15:18:11.771Z] 15828.84 IOPS, 61.83 MiB/s [2024-12-15T15:18:11.771Z]
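A burst like the one condensed above is easier to audit in aggregate than line by line. A minimal sketch (POSIX awk; the file name build.log is illustrative, not a path from this run) that tallies the failed commands per opcode and reports the LBA spread:

awk '/nvme_io_qpair_print_command/ {
  # every print_command line carries the opcode (READ/WRITE) and an lba:N field
  for (i = 1; i <= NF; i++) {
    if ($i == "READ" || $i == "WRITE") op = $i
    if ($i ~ /^lba:/) lba = substr($i, 5) + 0
  }
  n[op]++
  if (!(op in lo) || lba < lo[op]) lo[op] = lba
  if (lba > hi[op]) hi[op] = lba
}
END { for (o in n) printf "%-5s %6d commands, lba %d-%d\n", o, n[o], lo[o], hi[o] }' build.log

The status printed for every completion, (03/02), is Status Code Type 0x3 (path-related) with Status Code 0x2, which SPDK renders as ASYMMETRIC ACCESS INACCESSIBLE: the namespace is reached through an ANA group that is currently inaccessible, exactly what this multipath test provokes.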
00:30:43.201 [2024-12-15 16:18:09.099797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182500
00:30:43.201 [2024-12-15 16:18:09.099835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:30:43.201 [2024-12-15 16:18:09.099867 - 16:18:09.101638] (further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs of this burst condensed: READs over lba 57016-57496 via SGL KEYED DATA BLOCK and WRITEs over lba 57512-58024 via SGL DATA BLOCK OFFSET 0x0, all on qid:1, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0)
00:30:43.203 15900.73 IOPS, 62.11 MiB/s [2024-12-15T15:18:11.773Z] 15998.96 IOPS, 62.50 MiB/s [2024-12-15T15:18:11.773Z]
Received shutdown signal, test time was about 27.562337 seconds
00:30:43.203
00:30:43.203 Latency(us)
00:30:43.203 [2024-12-15T15:18:11.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:43.203 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:43.203 Verification LBA range: start 0x0 length 0x4000
00:30:43.203 Nvme0n1 : 27.56 16045.63 62.68 0.00 0.00 7958.05 60.62 3019898.88
00:30:43.203 [2024-12-15T15:18:11.773Z] ===================================================================================================================
00:30:43.203 [2024-12-15T15:18:11.773Z] Total : 16045.63 62.68 0.00 0.00 7958.05 60.62 3019898.88
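The summary table is internally consistent: at the fixed 4096-byte IO size in the job line, MiB/s = IOPS x 4096 / 2^20, so 16045.63 IOPS works out to the 62.68 MiB/s shown. A one-line recomputation as a sanity check (plain awk, nothing from the test framework):

awk 'BEGIN { printf "%.2f MiB/s\n", 16045.63 * 4096 / (1024 * 1024) }'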
00:30:43.203 [2024-12-15 16:18:11.359642] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:30:43.203 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:43.203 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:43.203 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:43.461 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:30:43.461 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
00:30:43.461 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:30:43.461 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:30:43.461 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:30:43.462 rmmod nvme_rdma
00:30:43.462 rmmod nvme_fabrics
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 2976601 ']'
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 2976601
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2976601 ']'
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2976601
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2976601
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2976601'
00:30:43.462 killing process with pid 2976601
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2976601
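The killprocess steps traced above follow a common bash teardown idiom: validate the pid argument, probe the process with kill -0, refuse to signal a sudo wrapper, then kill and reap. A standalone sketch of the same shape (reconstructed from the traced steps; the real common/autotest_common.sh has more branches):

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                    # require a pid argument
  kill -0 "$pid" 2>/dev/null || return 0       # nothing to do if already gone
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  [ "$process_name" = sudo ] && return 1       # never signal the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true              # reap if it is our child; ignore otherwise
}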
00:30:43.462 16:18:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2976601
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:30:43.720
00:30:43.720 real 0m38.072s
00:30:43.720 user 1m47.463s
00:30:43.720 sys 0m9.305s
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:43.720 ************************************
00:30:43.720 END TEST nvmf_host_multipath_status
00:30:43.720 ************************************
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.720 ************************************
00:30:43.720 START TEST nvmf_discovery_remove_ifc
00:30:43.720 ************************************
00:30:43.720 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:30:43.980 * Looking for test storage...
00:30:43.980 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:30:43.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:43.980 --rc genhtml_branch_coverage=1
00:30:43.980 --rc genhtml_function_coverage=1
00:30:43.980 --rc genhtml_legend=1
00:30:43.980 --rc geninfo_all_blocks=1
00:30:43.980 --rc geninfo_unexecuted_blocks=1
00:30:43.980
00:30:43.980 '
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:30:43.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:43.980 --rc genhtml_branch_coverage=1
00:30:43.980 --rc genhtml_function_coverage=1
00:30:43.980 --rc genhtml_legend=1
00:30:43.980 --rc geninfo_all_blocks=1
00:30:43.980 --rc geninfo_unexecuted_blocks=1
00:30:43.980
00:30:43.980 '
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:30:43.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:43.980 --rc genhtml_branch_coverage=1
00:30:43.980 --rc genhtml_function_coverage=1
00:30:43.980 --rc genhtml_legend=1
00:30:43.980 --rc geninfo_all_blocks=1
00:30:43.980 --rc geninfo_unexecuted_blocks=1
00:30:43.980
00:30:43.980 '
00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:30:43.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:43.980 --rc genhtml_branch_coverage=1
00:30:43.980 --rc genhtml_function_coverage=1
00:30:43.980 --rc genhtml_legend=1
00:30:43.980 --rc geninfo_all_blocks=1
00:30:43.980 --rc geninfo_unexecuted_blocks=1
00:30:43.980
00:30:43.980 '
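The cmp_versions walk traced above splits both version strings on '.', '-' and ':' (via IFS) and compares them component by component, which is why lt 1.15 2 succeeds: 1 < 2 decides it at the first component. A simplified sketch of the same logic (the real scripts/common.sh also validates each component via decimal() and supports more operators):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
  local -a ver1 ver2
  local op=$2 v a b
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}        # missing components compare as 0
    if ((a > b)); then [[ $op == '>' || $op == '>=' ]]; return; fi
    if ((a < b)); then [[ $op == '<' || $op == '<=' ]]; return; fi
  done
  [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
}

lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2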
00:30:43.980 --rc genhtml_function_coverage=1 00:30:43.980 --rc genhtml_legend=1 00:30:43.980 --rc geninfo_all_blocks=1 00:30:43.980 --rc geninfo_unexecuted_blocks=1 00:30:43.980 00:30:43.980 ' 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.980 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:30:43.981 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:30:43.981 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:30:43.981 00:30:43.981 real 0m0.214s 00:30:43.981 user 0m0.116s 00:30:43.981 sys 0m0.111s 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.981 ************************************ 00:30:43.981 END TEST nvmf_discovery_remove_ifc 00:30:43.981 ************************************ 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.981 ************************************ 00:30:43.981 START TEST nvmf_identify_kernel_target 00:30:43.981 ************************************ 00:30:43.981 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:30:44.240 * Looking for test storage... 
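The `[: : integer expression expected` complaint above (and repeated below when the next test sources common.sh) is bash rejecting `[ '' -eq 1 ]` at nvmf/common.sh line 33: `-eq` needs integer operands and the variable expanded to an empty string. The usual guard is to default the expansion before the numeric test; a minimal sketch, with FLAG standing in for whatever variable common.sh actually expands here:

    FLAG=""                            # simulates the empty expansion in this run
    if [ "${FLAG:-0}" -eq 1 ]; then    # ${VAR:-0} substitutes 0 when empty/unset
        echo "feature enabled"
    fi                                 # no error; the branch is simply not taken

Note the test run keeps going regardless, because `[` merely returns a nonzero status that the surrounding `if` treats as false.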
00:30:44.240 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:44.240 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.241 --rc genhtml_branch_coverage=1 00:30:44.241 --rc genhtml_function_coverage=1 00:30:44.241 --rc genhtml_legend=1 00:30:44.241 --rc geninfo_all_blocks=1 00:30:44.241 --rc geninfo_unexecuted_blocks=1 00:30:44.241 00:30:44.241 ' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.241 --rc genhtml_branch_coverage=1 00:30:44.241 --rc genhtml_function_coverage=1 00:30:44.241 --rc genhtml_legend=1 00:30:44.241 --rc geninfo_all_blocks=1 00:30:44.241 --rc geninfo_unexecuted_blocks=1 00:30:44.241 00:30:44.241 ' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.241 --rc genhtml_branch_coverage=1 00:30:44.241 --rc genhtml_function_coverage=1 00:30:44.241 --rc genhtml_legend=1 00:30:44.241 --rc geninfo_all_blocks=1 00:30:44.241 --rc geninfo_unexecuted_blocks=1 00:30:44.241 00:30:44.241 ' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.241 --rc genhtml_branch_coverage=1 00:30:44.241 --rc genhtml_function_coverage=1 00:30:44.241 --rc genhtml_legend=1 00:30:44.241 --rc geninfo_all_blocks=1 00:30:44.241 --rc geninfo_unexecuted_blocks=1 00:30:44.241 00:30:44.241 ' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:44.241 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.241 16:18:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.806 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:50.807 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:30:50.807 
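The `Found 0000:d9:00.0 (0x15b3 - 0x1015)` line means gather_supported_nvmf_pci_devs matched a Mellanox ConnectX-4 Lx function in its pci_bus_cache, which is keyed as `$vendor:$device`. A self-contained sketch of the same classification, reading sysfs directly instead of SPDK's cached scan:

    # Prints Mellanox PCI functions in the same "addr (vendor - device)"
    # shape as the trace; assumes only the standard sysfs layout.
    mellanox=0x15b3
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")    # e.g. 0x15b3
        device=$(cat "$dev/device")    # e.g. 0x1015 (ConnectX-4 Lx)
        if [ "$vendor" = "$mellanox" ]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done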
16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:50.807 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:50.807 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:50.807 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- 
# is_hw=yes 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # rdma_device_init 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@526 -- # allocate_nic_ips 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.807 16:18:19 
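rdma_device_init above boils down to loading the kernel RDMA stack before any interface or IP work happens; condensed, the modprobe sequence it traces is:

    # Same modules, same order as the load_ib_rdma_modules trace above
    # (ib_core would be pulled in as a dependency of the others anyway;
    # listing it is just explicit).
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done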
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:50.807 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:50.808 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:50.808 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:50.808 altname enp217s0f0np0 00:30:50.808 altname ens818f0np0 00:30:50.808 inet 192.168.100.8/24 scope global mlx_0_0 00:30:50.808 valid_lft forever preferred_lft forever 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:50.808 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:50.808 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:50.808 altname enp217s0f1np1 00:30:50.808 altname ens818f1np1 00:30:50.808 inet 192.168.100.9/24 scope global mlx_0_1 00:30:50.808 valid_lft forever preferred_lft forever 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- 
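get_ip_address, traced once per interface above, is a three-stage pipeline over the single-line `ip -o -4` output: field 4 is `ADDR/PREFIX`, and cut strips the prefix length:

    # Exactly the pipeline in the trace, wrapped as a function for reuse.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9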
# '[' '' == iso ']' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:50.808 16:18:19 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:30:50.808 192.168.100.9' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:30:50.808 192.168.100.9' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # head -n 1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:30:50.808 192.168.100.9' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # tail -n +2 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # head -n 1 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:30:50.808 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
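Picking the first and second target IPs above is plain head/tail over the newline-joined list that get_available_rdma_ips produced:

    # Reproduces common.sh@480-@482 with this run's two addresses.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9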
nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:51.067 16:18:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:30:54.352 Waiting for block devices as requested 00:30:54.352 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:54.352 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:54.352 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:54.611 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:54.611 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:54.611 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:54.611 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:54.869 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:54.869 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:54.869 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:55.127 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:55.127 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:55.127 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:55.385 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:55.385 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:55.385 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:55.644 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:55.644 16:18:24 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:55.644 No valid GPT data, bailing 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:30:55.644 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 192.168.100.8 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo rdma 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:30:55.645 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:55.903 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:30:55.903 00:30:55.903 Discovery Log Number of Records 2, Generation counter 2 00:30:55.903 =====Discovery Log Entry 0====== 00:30:55.903 trtype: rdma 00:30:55.903 adrfam: ipv4 00:30:55.903 subtype: current discovery subsystem 00:30:55.903 treq: not specified, sq flow control disable supported 00:30:55.903 portid: 1 00:30:55.903 trsvcid: 4420 00:30:55.903 subnqn: 
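The mkdir/echo/ln -s sequence traced just above (nvmf/common.sh@682 through @701) is the whole recipe for exporting /dev/nvme0n1 through the kernel nvmet target over RDMA. The xtrace output hides the redirect targets, so the attribute paths below are the standard nvmet configfs names rather than anything the log states; the `Model Number: SPDK-nqn.2016-06.io.spdk:testnqn` reported by the identify pass further down is at least consistent with the attr_model write:

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"                  # configfs auto-creates namespaces/ beneath it
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo "SPDK-$nqn"   > "$subsys/attr_model"               # assumed target of @689
    echo 1             > "$subsys/attr_allow_any_host"      # assumed target of @691
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path" # backing block device
    echo 1             > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$port/addr_traddr"                # RDMA listen address
    echo rdma          > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                     # @701: expose the subsystem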
nqn.2014-08.org.nvmexpress.discovery 00:30:55.903 traddr: 192.168.100.8 00:30:55.903 eflags: none 00:30:55.903 rdma_prtype: not specified 00:30:55.903 rdma_qptype: connected 00:30:55.903 rdma_cms: rdma-cm 00:30:55.903 rdma_pkey: 0x0000 00:30:55.903 =====Discovery Log Entry 1====== 00:30:55.903 trtype: rdma 00:30:55.903 adrfam: ipv4 00:30:55.903 subtype: nvme subsystem 00:30:55.903 treq: not specified, sq flow control disable supported 00:30:55.903 portid: 1 00:30:55.903 trsvcid: 4420 00:30:55.903 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:55.903 traddr: 192.168.100.8 00:30:55.903 eflags: none 00:30:55.903 rdma_prtype: not specified 00:30:55.903 rdma_qptype: connected 00:30:55.903 rdma_cms: rdma-cm 00:30:55.903 rdma_pkey: 0x0000 00:30:55.903 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:30:55.903 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:56.162 ===================================================== 00:30:56.162 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:56.162 ===================================================== 00:30:56.162 Controller Capabilities/Features 00:30:56.162 ================================ 00:30:56.162 Vendor ID: 0000 00:30:56.162 Subsystem Vendor ID: 0000 00:30:56.162 Serial Number: 20436d8a3eab154fde15 00:30:56.162 Model Number: Linux 00:30:56.162 Firmware Version: 6.8.9-20 00:30:56.162 Recommended Arb Burst: 0 00:30:56.162 IEEE OUI Identifier: 00 00 00 00:30:56.162 Multi-path I/O 00:30:56.162 May have multiple subsystem ports: No 00:30:56.162 May have multiple controllers: No 00:30:56.162 Associated with SR-IOV VF: No 00:30:56.162 Max Data Transfer Size: Unlimited 00:30:56.162 Max Number of Namespaces: 0 00:30:56.162 Max Number of I/O Queues: 1024 00:30:56.162 NVMe Specification Version (VS): 1.3 00:30:56.162 NVMe Specification Version (Identify): 1.3 00:30:56.162 Maximum Queue Entries: 128 00:30:56.162 Contiguous Queues Required: No 00:30:56.162 Arbitration Mechanisms Supported 00:30:56.162 Weighted Round Robin: Not Supported 00:30:56.162 Vendor Specific: Not Supported 00:30:56.162 Reset Timeout: 7500 ms 00:30:56.162 Doorbell Stride: 4 bytes 00:30:56.162 NVM Subsystem Reset: Not Supported 00:30:56.162 Command Sets Supported 00:30:56.162 NVM Command Set: Supported 00:30:56.162 Boot Partition: Not Supported 00:30:56.162 Memory Page Size Minimum: 4096 bytes 00:30:56.162 Memory Page Size Maximum: 4096 bytes 00:30:56.162 Persistent Memory Region: Not Supported 00:30:56.162 Optional Asynchronous Events Supported 00:30:56.162 Namespace Attribute Notices: Not Supported 00:30:56.162 Firmware Activation Notices: Not Supported 00:30:56.162 ANA Change Notices: Not Supported 00:30:56.162 PLE Aggregate Log Change Notices: Not Supported 00:30:56.162 LBA Status Info Alert Notices: Not Supported 00:30:56.162 EGE Aggregate Log Change Notices: Not Supported 00:30:56.162 Normal NVM Subsystem Shutdown event: Not Supported 00:30:56.162 Zone Descriptor Change Notices: Not Supported 00:30:56.162 Discovery Log Change Notices: Supported 00:30:56.162 Controller Attributes 00:30:56.162 128-bit Host Identifier: Not Supported 00:30:56.162 Non-Operational Permissive Mode: Not Supported 00:30:56.162 NVM Sets: Not Supported 00:30:56.162 Read Recovery Levels: Not Supported 00:30:56.162 Endurance Groups: Not Supported 00:30:56.162 Predictable Latency Mode: Not 
Supported 00:30:56.162 Traffic Based Keep ALive: Not Supported 00:30:56.162 Namespace Granularity: Not Supported 00:30:56.162 SQ Associations: Not Supported 00:30:56.162 UUID List: Not Supported 00:30:56.162 Multi-Domain Subsystem: Not Supported 00:30:56.162 Fixed Capacity Management: Not Supported 00:30:56.162 Variable Capacity Management: Not Supported 00:30:56.162 Delete Endurance Group: Not Supported 00:30:56.162 Delete NVM Set: Not Supported 00:30:56.162 Extended LBA Formats Supported: Not Supported 00:30:56.162 Flexible Data Placement Supported: Not Supported 00:30:56.162 00:30:56.162 Controller Memory Buffer Support 00:30:56.162 ================================ 00:30:56.162 Supported: No 00:30:56.162 00:30:56.162 Persistent Memory Region Support 00:30:56.162 ================================ 00:30:56.162 Supported: No 00:30:56.162 00:30:56.162 Admin Command Set Attributes 00:30:56.162 ============================ 00:30:56.162 Security Send/Receive: Not Supported 00:30:56.162 Format NVM: Not Supported 00:30:56.162 Firmware Activate/Download: Not Supported 00:30:56.162 Namespace Management: Not Supported 00:30:56.162 Device Self-Test: Not Supported 00:30:56.162 Directives: Not Supported 00:30:56.162 NVMe-MI: Not Supported 00:30:56.162 Virtualization Management: Not Supported 00:30:56.162 Doorbell Buffer Config: Not Supported 00:30:56.162 Get LBA Status Capability: Not Supported 00:30:56.162 Command & Feature Lockdown Capability: Not Supported 00:30:56.162 Abort Command Limit: 1 00:30:56.162 Async Event Request Limit: 1 00:30:56.162 Number of Firmware Slots: N/A 00:30:56.162 Firmware Slot 1 Read-Only: N/A 00:30:56.162 Firmware Activation Without Reset: N/A 00:30:56.162 Multiple Update Detection Support: N/A 00:30:56.162 Firmware Update Granularity: No Information Provided 00:30:56.162 Per-Namespace SMART Log: No 00:30:56.162 Asymmetric Namespace Access Log Page: Not Supported 00:30:56.162 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:56.162 Command Effects Log Page: Not Supported 00:30:56.162 Get Log Page Extended Data: Supported 00:30:56.162 Telemetry Log Pages: Not Supported 00:30:56.162 Persistent Event Log Pages: Not Supported 00:30:56.162 Supported Log Pages Log Page: May Support 00:30:56.162 Commands Supported & Effects Log Page: Not Supported 00:30:56.162 Feature Identifiers & Effects Log Page:May Support 00:30:56.162 NVMe-MI Commands & Effects Log Page: May Support 00:30:56.162 Data Area 4 for Telemetry Log: Not Supported 00:30:56.162 Error Log Page Entries Supported: 1 00:30:56.162 Keep Alive: Not Supported 00:30:56.162 00:30:56.162 NVM Command Set Attributes 00:30:56.162 ========================== 00:30:56.162 Submission Queue Entry Size 00:30:56.162 Max: 1 00:30:56.162 Min: 1 00:30:56.162 Completion Queue Entry Size 00:30:56.162 Max: 1 00:30:56.162 Min: 1 00:30:56.162 Number of Namespaces: 0 00:30:56.162 Compare Command: Not Supported 00:30:56.162 Write Uncorrectable Command: Not Supported 00:30:56.162 Dataset Management Command: Not Supported 00:30:56.162 Write Zeroes Command: Not Supported 00:30:56.162 Set Features Save Field: Not Supported 00:30:56.162 Reservations: Not Supported 00:30:56.162 Timestamp: Not Supported 00:30:56.162 Copy: Not Supported 00:30:56.162 Volatile Write Cache: Not Present 00:30:56.162 Atomic Write Unit (Normal): 1 00:30:56.162 Atomic Write Unit (PFail): 1 00:30:56.162 Atomic Compare & Write Unit: 1 00:30:56.162 Fused Compare & Write: Not Supported 00:30:56.162 Scatter-Gather List 00:30:56.162 SGL Command Set: Supported 00:30:56.162 SGL 
Keyed: Supported 00:30:56.162 SGL Bit Bucket Descriptor: Not Supported 00:30:56.162 SGL Metadata Pointer: Not Supported 00:30:56.162 Oversized SGL: Not Supported 00:30:56.162 SGL Metadata Address: Not Supported 00:30:56.162 SGL Offset: Supported 00:30:56.162 Transport SGL Data Block: Not Supported 00:30:56.162 Replay Protected Memory Block: Not Supported 00:30:56.162 00:30:56.162 Firmware Slot Information 00:30:56.162 ========================= 00:30:56.162 Active slot: 0 00:30:56.162 00:30:56.162 00:30:56.162 Error Log 00:30:56.162 ========= 00:30:56.162 00:30:56.162 Active Namespaces 00:30:56.162 ================= 00:30:56.162 Discovery Log Page 00:30:56.162 ================== 00:30:56.162 Generation Counter: 2 00:30:56.162 Number of Records: 2 00:30:56.162 Record Format: 0 00:30:56.162 00:30:56.162 Discovery Log Entry 0 00:30:56.162 ---------------------- 00:30:56.162 Transport Type: 1 (RDMA) 00:30:56.162 Address Family: 1 (IPv4) 00:30:56.162 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:56.162 Entry Flags: 00:30:56.162 Duplicate Returned Information: 0 00:30:56.162 Explicit Persistent Connection Support for Discovery: 0 00:30:56.162 Transport Requirements: 00:30:56.162 Secure Channel: Not Specified 00:30:56.162 Port ID: 1 (0x0001) 00:30:56.162 Controller ID: 65535 (0xffff) 00:30:56.162 Admin Max SQ Size: 32 00:30:56.162 Transport Service Identifier: 4420 00:30:56.162 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:56.162 Transport Address: 192.168.100.8 00:30:56.162 Transport Specific Address Subtype - RDMA 00:30:56.162 RDMA QP Service Type: 1 (Reliable Connected) 00:30:56.162 RDMA Provider Type: 1 (No provider specified) 00:30:56.162 RDMA CM Service: 1 (RDMA_CM) 00:30:56.162 Discovery Log Entry 1 00:30:56.162 ---------------------- 00:30:56.162 Transport Type: 1 (RDMA) 00:30:56.162 Address Family: 1 (IPv4) 00:30:56.162 Subsystem Type: 2 (NVM Subsystem) 00:30:56.162 Entry Flags: 00:30:56.162 Duplicate Returned Information: 0 00:30:56.162 Explicit Persistent Connection Support for Discovery: 0 00:30:56.162 Transport Requirements: 00:30:56.162 Secure Channel: Not Specified 00:30:56.162 Port ID: 1 (0x0001) 00:30:56.162 Controller ID: 65535 (0xffff) 00:30:56.162 Admin Max SQ Size: 32 00:30:56.162 Transport Service Identifier: 4420 00:30:56.162 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:56.162 Transport Address: 192.168.100.8 00:30:56.162 Transport Specific Address Subtype - RDMA 00:30:56.162 RDMA QP Service Type: 1 (Reliable Connected) 00:30:56.162 RDMA Provider Type: 1 (No provider specified) 00:30:56.162 RDMA CM Service: 1 (RDMA_CM) 00:30:56.162 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:56.162 get_feature(0x01) failed 00:30:56.162 get_feature(0x02) failed 00:30:56.162 get_feature(0x04) failed 00:30:56.162 ===================================================== 00:30:56.162 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:30:56.162 ===================================================== 00:30:56.162 Controller Capabilities/Features 00:30:56.162 ================================ 00:30:56.162 Vendor ID: 0000 00:30:56.162 Subsystem Vendor ID: 0000 00:30:56.162 Serial Number: 56de717f98cdff8dd365 00:30:56.162 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:56.162 Firmware 
Version: 6.8.9-20 00:30:56.162 Recommended Arb Burst: 6 00:30:56.162 IEEE OUI Identifier: 00 00 00 00:30:56.162 Multi-path I/O 00:30:56.162 May have multiple subsystem ports: Yes 00:30:56.162 May have multiple controllers: Yes 00:30:56.162 Associated with SR-IOV VF: No 00:30:56.162 Max Data Transfer Size: 1048576 00:30:56.162 Max Number of Namespaces: 1024 00:30:56.162 Max Number of I/O Queues: 128 00:30:56.162 NVMe Specification Version (VS): 1.3 00:30:56.162 NVMe Specification Version (Identify): 1.3 00:30:56.162 Maximum Queue Entries: 128 00:30:56.162 Contiguous Queues Required: No 00:30:56.162 Arbitration Mechanisms Supported 00:30:56.162 Weighted Round Robin: Not Supported 00:30:56.162 Vendor Specific: Not Supported 00:30:56.162 Reset Timeout: 7500 ms 00:30:56.162 Doorbell Stride: 4 bytes 00:30:56.162 NVM Subsystem Reset: Not Supported 00:30:56.162 Command Sets Supported 00:30:56.162 NVM Command Set: Supported 00:30:56.162 Boot Partition: Not Supported 00:30:56.162 Memory Page Size Minimum: 4096 bytes 00:30:56.162 Memory Page Size Maximum: 4096 bytes 00:30:56.162 Persistent Memory Region: Not Supported 00:30:56.162 Optional Asynchronous Events Supported 00:30:56.162 Namespace Attribute Notices: Supported 00:30:56.162 Firmware Activation Notices: Not Supported 00:30:56.162 ANA Change Notices: Supported 00:30:56.162 PLE Aggregate Log Change Notices: Not Supported 00:30:56.162 LBA Status Info Alert Notices: Not Supported 00:30:56.162 EGE Aggregate Log Change Notices: Not Supported 00:30:56.162 Normal NVM Subsystem Shutdown event: Not Supported 00:30:56.162 Zone Descriptor Change Notices: Not Supported 00:30:56.162 Discovery Log Change Notices: Not Supported 00:30:56.162 Controller Attributes 00:30:56.162 128-bit Host Identifier: Supported 00:30:56.162 Non-Operational Permissive Mode: Not Supported 00:30:56.162 NVM Sets: Not Supported 00:30:56.162 Read Recovery Levels: Not Supported 00:30:56.162 Endurance Groups: Not Supported 00:30:56.162 Predictable Latency Mode: Not Supported 00:30:56.162 Traffic Based Keep ALive: Supported 00:30:56.162 Namespace Granularity: Not Supported 00:30:56.162 SQ Associations: Not Supported 00:30:56.162 UUID List: Not Supported 00:30:56.162 Multi-Domain Subsystem: Not Supported 00:30:56.162 Fixed Capacity Management: Not Supported 00:30:56.162 Variable Capacity Management: Not Supported 00:30:56.162 Delete Endurance Group: Not Supported 00:30:56.162 Delete NVM Set: Not Supported 00:30:56.162 Extended LBA Formats Supported: Not Supported 00:30:56.162 Flexible Data Placement Supported: Not Supported 00:30:56.162 00:30:56.162 Controller Memory Buffer Support 00:30:56.162 ================================ 00:30:56.162 Supported: No 00:30:56.162 00:30:56.162 Persistent Memory Region Support 00:30:56.162 ================================ 00:30:56.162 Supported: No 00:30:56.162 00:30:56.162 Admin Command Set Attributes 00:30:56.162 ============================ 00:30:56.162 Security Send/Receive: Not Supported 00:30:56.162 Format NVM: Not Supported 00:30:56.162 Firmware Activate/Download: Not Supported 00:30:56.162 Namespace Management: Not Supported 00:30:56.162 Device Self-Test: Not Supported 00:30:56.162 Directives: Not Supported 00:30:56.162 NVMe-MI: Not Supported 00:30:56.162 Virtualization Management: Not Supported 00:30:56.162 Doorbell Buffer Config: Not Supported 00:30:56.162 Get LBA Status Capability: Not Supported 00:30:56.162 Command & Feature Lockdown Capability: Not Supported 00:30:56.162 Abort Command Limit: 4 00:30:56.162 Async Event Request Limit: 4 
00:30:56.162 Number of Firmware Slots: N/A 00:30:56.162 Firmware Slot 1 Read-Only: N/A 00:30:56.162 Firmware Activation Without Reset: N/A 00:30:56.162 Multiple Update Detection Support: N/A 00:30:56.162 Firmware Update Granularity: No Information Provided 00:30:56.162 Per-Namespace SMART Log: Yes 00:30:56.162 Asymmetric Namespace Access Log Page: Supported 00:30:56.162 ANA Transition Time : 10 sec 00:30:56.162 00:30:56.162 Asymmetric Namespace Access Capabilities 00:30:56.163 ANA Optimized State : Supported 00:30:56.163 ANA Non-Optimized State : Supported 00:30:56.163 ANA Inaccessible State : Supported 00:30:56.163 ANA Persistent Loss State : Supported 00:30:56.163 ANA Change State : Supported 00:30:56.163 ANAGRPID is not changed : No 00:30:56.163 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:56.163 00:30:56.163 ANA Group Identifier Maximum : 128 00:30:56.163 Number of ANA Group Identifiers : 128 00:30:56.163 Max Number of Allowed Namespaces : 1024 00:30:56.163 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:56.163 Command Effects Log Page: Supported 00:30:56.163 Get Log Page Extended Data: Supported 00:30:56.163 Telemetry Log Pages: Not Supported 00:30:56.163 Persistent Event Log Pages: Not Supported 00:30:56.163 Supported Log Pages Log Page: May Support 00:30:56.163 Commands Supported & Effects Log Page: Not Supported 00:30:56.163 Feature Identifiers & Effects Log Page:May Support 00:30:56.163 NVMe-MI Commands & Effects Log Page: May Support 00:30:56.163 Data Area 4 for Telemetry Log: Not Supported 00:30:56.163 Error Log Page Entries Supported: 128 00:30:56.163 Keep Alive: Supported 00:30:56.163 Keep Alive Granularity: 1000 ms 00:30:56.163 00:30:56.163 NVM Command Set Attributes 00:30:56.163 ========================== 00:30:56.163 Submission Queue Entry Size 00:30:56.163 Max: 64 00:30:56.163 Min: 64 00:30:56.163 Completion Queue Entry Size 00:30:56.163 Max: 16 00:30:56.163 Min: 16 00:30:56.163 Number of Namespaces: 1024 00:30:56.163 Compare Command: Not Supported 00:30:56.163 Write Uncorrectable Command: Not Supported 00:30:56.163 Dataset Management Command: Supported 00:30:56.163 Write Zeroes Command: Supported 00:30:56.163 Set Features Save Field: Not Supported 00:30:56.163 Reservations: Not Supported 00:30:56.163 Timestamp: Not Supported 00:30:56.163 Copy: Not Supported 00:30:56.163 Volatile Write Cache: Present 00:30:56.163 Atomic Write Unit (Normal): 1 00:30:56.163 Atomic Write Unit (PFail): 1 00:30:56.163 Atomic Compare & Write Unit: 1 00:30:56.163 Fused Compare & Write: Not Supported 00:30:56.163 Scatter-Gather List 00:30:56.163 SGL Command Set: Supported 00:30:56.163 SGL Keyed: Supported 00:30:56.163 SGL Bit Bucket Descriptor: Not Supported 00:30:56.163 SGL Metadata Pointer: Not Supported 00:30:56.163 Oversized SGL: Not Supported 00:30:56.163 SGL Metadata Address: Not Supported 00:30:56.163 SGL Offset: Supported 00:30:56.163 Transport SGL Data Block: Not Supported 00:30:56.163 Replay Protected Memory Block: Not Supported 00:30:56.163 00:30:56.163 Firmware Slot Information 00:30:56.163 ========================= 00:30:56.163 Active slot: 0 00:30:56.163 00:30:56.163 Asymmetric Namespace Access 00:30:56.163 =========================== 00:30:56.163 Change Count : 0 00:30:56.163 Number of ANA Group Descriptors : 1 00:30:56.163 ANA Group Descriptor : 0 00:30:56.163 ANA Group ID : 1 00:30:56.163 Number of NSID Values : 1 00:30:56.163 Change Count : 0 00:30:56.163 ANA State : 1 00:30:56.163 Namespace Identifier : 1 00:30:56.163 00:30:56.163 Commands Supported and Effects 
00:30:56.163 ============================== 00:30:56.163 Admin Commands 00:30:56.163 -------------- 00:30:56.163 Get Log Page (02h): Supported 00:30:56.163 Identify (06h): Supported 00:30:56.163 Abort (08h): Supported 00:30:56.163 Set Features (09h): Supported 00:30:56.163 Get Features (0Ah): Supported 00:30:56.163 Asynchronous Event Request (0Ch): Supported 00:30:56.163 Keep Alive (18h): Supported 00:30:56.163 I/O Commands 00:30:56.163 ------------ 00:30:56.163 Flush (00h): Supported 00:30:56.163 Write (01h): Supported LBA-Change 00:30:56.163 Read (02h): Supported 00:30:56.163 Write Zeroes (08h): Supported LBA-Change 00:30:56.163 Dataset Management (09h): Supported 00:30:56.163 00:30:56.163 Error Log 00:30:56.163 ========= 00:30:56.163 Entry: 0 00:30:56.163 Error Count: 0x3 00:30:56.163 Submission Queue Id: 0x0 00:30:56.163 Command Id: 0x5 00:30:56.163 Phase Bit: 0 00:30:56.163 Status Code: 0x2 00:30:56.163 Status Code Type: 0x0 00:30:56.163 Do Not Retry: 1 00:30:56.163 Error Location: 0x28 00:30:56.163 LBA: 0x0 00:30:56.163 Namespace: 0x0 00:30:56.163 Vendor Log Page: 0x0 00:30:56.163 ----------- 00:30:56.163 Entry: 1 00:30:56.163 Error Count: 0x2 00:30:56.163 Submission Queue Id: 0x0 00:30:56.163 Command Id: 0x5 00:30:56.163 Phase Bit: 0 00:30:56.163 Status Code: 0x2 00:30:56.163 Status Code Type: 0x0 00:30:56.163 Do Not Retry: 1 00:30:56.163 Error Location: 0x28 00:30:56.163 LBA: 0x0 00:30:56.163 Namespace: 0x0 00:30:56.163 Vendor Log Page: 0x0 00:30:56.163 ----------- 00:30:56.163 Entry: 2 00:30:56.163 Error Count: 0x1 00:30:56.163 Submission Queue Id: 0x0 00:30:56.163 Command Id: 0x0 00:30:56.163 Phase Bit: 0 00:30:56.163 Status Code: 0x2 00:30:56.163 Status Code Type: 0x0 00:30:56.163 Do Not Retry: 1 00:30:56.163 Error Location: 0x28 00:30:56.163 LBA: 0x0 00:30:56.163 Namespace: 0x0 00:30:56.163 Vendor Log Page: 0x0 00:30:56.163 00:30:56.163 Number of Queues 00:30:56.163 ================ 00:30:56.163 Number of I/O Submission Queues: 128 00:30:56.163 Number of I/O Completion Queues: 128 00:30:56.163 00:30:56.163 ZNS Specific Controller Data 00:30:56.163 ============================ 00:30:56.163 Zone Append Size Limit: 0 00:30:56.163 00:30:56.163 00:30:56.163 Active Namespaces 00:30:56.163 ================= 00:30:56.163 get_feature(0x05) failed 00:30:56.163 Namespace ID:1 00:30:56.163 Command Set Identifier: NVM (00h) 00:30:56.163 Deallocate: Supported 00:30:56.163 Deallocated/Unwritten Error: Not Supported 00:30:56.163 Deallocated Read Value: Unknown 00:30:56.163 Deallocate in Write Zeroes: Not Supported 00:30:56.163 Deallocated Guard Field: 0xFFFF 00:30:56.163 Flush: Supported 00:30:56.163 Reservation: Not Supported 00:30:56.163 Namespace Sharing Capabilities: Multiple Controllers 00:30:56.163 Size (in LBAs): 3907029168 (1863GiB) 00:30:56.163 Capacity (in LBAs): 3907029168 (1863GiB) 00:30:56.163 Utilization (in LBAs): 3907029168 (1863GiB) 00:30:56.163 UUID: 407a0935-0562-4f3d-b5c3-fdad5e5cb415 00:30:56.163 Thin Provisioning: Not Supported 00:30:56.163 Per-NS Atomic Units: Yes 00:30:56.163 Atomic Boundary Size (Normal): 0 00:30:56.163 Atomic Boundary Size (PFail): 0 00:30:56.163 Atomic Boundary Offset: 0 00:30:56.163 NGUID/EUI64 Never Reused: No 00:30:56.163 ANA group ID: 1 00:30:56.163 Namespace Write Protected: No 00:30:56.163 Number of LBA Formats: 1 00:30:56.163 Current LBA Format: LBA Format #00 00:30:56.163 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:56.163 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # 
nvmftestfini 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:56.163 rmmod nvme_rdma 00:30:56.163 rmmod nvme_fabrics 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:56.163 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:30:56.421 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:56.421 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:56.421 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:56.421 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:56.421 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:30:56.421 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_rdma nvmet 00:30:56.421 16:18:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:30:59.703 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 
00:30:59.703 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:59.703 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:01.608 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:01.608 00:31:01.608 real 0m17.676s 00:31:01.608 user 0m4.773s 00:31:01.608 sys 0m10.233s 00:31:01.608 16:18:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:01.608 16:18:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:01.608 ************************************ 00:31:01.608 END TEST nvmf_identify_kernel_target 00:31:01.608 ************************************ 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.889 ************************************ 00:31:01.889 START TEST nvmf_auth_host 00:31:01.889 ************************************ 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:01.889 * Looking for test storage... 00:31:01.889 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:01.889 16:18:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:01.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.889 --rc genhtml_branch_coverage=1 00:31:01.889 --rc genhtml_function_coverage=1 00:31:01.889 --rc genhtml_legend=1 00:31:01.889 --rc geninfo_all_blocks=1 00:31:01.889 --rc geninfo_unexecuted_blocks=1 00:31:01.889 00:31:01.889 ' 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:01.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.889 --rc genhtml_branch_coverage=1 00:31:01.889 --rc genhtml_function_coverage=1 00:31:01.889 --rc genhtml_legend=1 00:31:01.889 --rc geninfo_all_blocks=1 00:31:01.889 --rc geninfo_unexecuted_blocks=1 00:31:01.889 00:31:01.889 ' 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:01.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.889 --rc genhtml_branch_coverage=1 00:31:01.889 --rc genhtml_function_coverage=1 00:31:01.889 --rc genhtml_legend=1 00:31:01.889 --rc geninfo_all_blocks=1 00:31:01.889 --rc geninfo_unexecuted_blocks=1 00:31:01.889 00:31:01.889 ' 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:01.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.889 --rc genhtml_branch_coverage=1 00:31:01.889 --rc genhtml_function_coverage=1 00:31:01.889 --rc genhtml_legend=1 00:31:01.889 --rc geninfo_all_blocks=1 00:31:01.889 --rc geninfo_unexecuted_blocks=1 00:31:01.889 00:31:01.889 ' 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.889 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:02.169 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:02.169 16:18:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.741 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.742 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.742 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:08.742 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:31:08.742 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:31:08.742 16:18:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:08.742 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:08.742 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:31:08.742 16:18:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:08.742 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:08.742 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # rdma_device_init 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@67 -- # modprobe ib_core 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # allocate_nic_ips 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:08.742 16:18:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:08.742 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:08.742 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:08.742 altname enp217s0f0np0 00:31:08.742 altname ens818f0np0 00:31:08.742 inet 192.168.100.8/24 scope global mlx_0_0 00:31:08.742 valid_lft forever preferred_lft forever 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:08.742 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:08.742 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:08.742 altname enp217s0f1np1 00:31:08.742 altname ens818f1np1 00:31:08.742 inet 192.168.100.9/24 scope global mlx_0_1 00:31:08.742 valid_lft forever preferred_lft forever 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:08.742 16:18:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:31:08.742 192.168.100.9' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:31:08.742 192.168.100.9' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # head -n 1 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:31:08.742 192.168.100.9' 00:31:08.742 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # tail -n +2 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # head -n 1 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:31:08.743 16:18:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=2992315 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 2992315 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2992315 ']' 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:08.743 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=017900cfdc3272d160484ed8731229be 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.ZpY 
00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 017900cfdc3272d160484ed8731229be 0 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 017900cfdc3272d160484ed8731229be 0 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=017900cfdc3272d160484ed8731229be 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:31:09.001 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.ZpY 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.ZpY 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZpY 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5870e6af8e7e619a1ff39d44d8201394228bc03f136485ddb2e7e6d0eeb10ad0 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.Usu 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5870e6af8e7e619a1ff39d44d8201394228bc03f136485ddb2e7e6d0eeb10ad0 3 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5870e6af8e7e619a1ff39d44d8201394228bc03f136485ddb2e7e6d0eeb10ad0 3 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=5870e6af8e7e619a1ff39d44d8201394228bc03f136485ddb2e7e6d0eeb10ad0 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.Usu 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.Usu 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Usu 00:31:09.260 16:18:37 
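gen_dhchap_key (nvmf/common.sh@747-756, traced above) draws len/2 random bytes with xxd (two hex digits per byte) and hands the hex string to format_key, which shells out to python (common.sh@729) to build the DHHC-1 secret representation. Reading the generated strings in this log back, the layout appears to be DHHC-1:<digest indicator>:<base64(secret bytes followed by a 4-byte CRC32)>:, with indicator 00 for a cleartext key and 01/02/03 for sha256/sha384/sha512. A reconstruction under those assumptions (the CRC byte order in particular is inferred, not read from the log, and this is not the verbatim common.sh body):

    # Hedged reconstruction of gen_dhchap_key/format_key.
    gen_hex_key() {
        local len=$1                      # length of the hex string: 32, 48, or 64 here
        xxd -p -c0 -l $((len / 2)) /dev/urandom
    }

    format_dhchap_key() {                 # args: <hex string> <digest indicator 0..3>
        local key=$1 digest=$2
        # The ASCII hex string itself is the secret; a CRC32 of it (assumed
        # little-endian) is appended before base64 encoding, which matches the
        # lengths and prefixes of the DHHC-1 strings visible in this log.
        python3 - "$key" "$digest" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    EOF
    }

    key=$(gen_hex_key 48)
    secret=$(format_dhchap_key "$key" 0)  # e.g. DHHC-1:00:...==:
    file=$(mktemp -t spdk.key-null.XXX) && echo "$secret" > "$file" && chmod 0600 "$file"
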
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=dadf683e2062e24257173d4b7571af6c5981fd4431aef3fd 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.YaS 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key dadf683e2062e24257173d4b7571af6c5981fd4431aef3fd 0 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 dadf683e2062e24257173d4b7571af6c5981fd4431aef3fd 0 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=dadf683e2062e24257173d4b7571af6c5981fd4431aef3fd 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.260 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.YaS 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.YaS 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YaS 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=f10c04239d1ff51d9082d4a6f777a3d7de9aa152512ff90d 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.I9R 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 
f10c04239d1ff51d9082d4a6f777a3d7de9aa152512ff90d 2 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 f10c04239d1ff51d9082d4a6f777a3d7de9aa152512ff90d 2 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=f10c04239d1ff51d9082d4a6f777a3d7de9aa152512ff90d 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.I9R 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.I9R 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.I9R 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:09.261 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ca674904f1d895a6204f22e18f9f7da6 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Gnl 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ca674904f1d895a6204f22e18f9f7da6 1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ca674904f1d895a6204f22e18f9f7da6 1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ca674904f1d895a6204f22e18f9f7da6 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Gnl 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Gnl 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Gnl 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 
00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=56648e64d21f0211e8f9084b986816b7 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.s1z 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 56648e64d21f0211e8f9084b986816b7 1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 56648e64d21f0211e8f9084b986816b7 1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=56648e64d21f0211e8f9084b986816b7 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.s1z 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.s1z 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.s1z 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=955f174dc7b8b71feb6e5678a294d5b0a43471f44ce7501a 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.I85 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 955f174dc7b8b71feb6e5678a294d5b0a43471f44ce7501a 2 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 955f174dc7b8b71feb6e5678a294d5b0a43471f44ce7501a 2 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # local prefix key digest 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=955f174dc7b8b71feb6e5678a294d5b0a43471f44ce7501a 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:31:09.519 16:18:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.519 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.I85 00:31:09.519 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.I85 00:31:09.519 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.I85 00:31:09.519 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:09.519 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=a670c8259fde1392c0ff6a2a46fab979 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.A8N 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key a670c8259fde1392c0ff6a2a46fab979 0 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 a670c8259fde1392c0ff6a2a46fab979 0 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=a670c8259fde1392c0ff6a2a46fab979 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.A8N 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.A8N 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.A8N 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:31:09.520 16:18:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:09.520 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=6cdff2421ca0382f1764587937a509d5ba9127fcdfa7039f94812d5d3c6f2340 00:31:09.777 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:31:09.777 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.H4Q 00:31:09.777 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 6cdff2421ca0382f1764587937a509d5ba9127fcdfa7039f94812d5d3c6f2340 3 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 6cdff2421ca0382f1764587937a509d5ba9127fcdfa7039f94812d5d3c6f2340 3 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=6cdff2421ca0382f1764587937a509d5ba9127fcdfa7039f94812d5d3c6f2340 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.H4Q 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.H4Q 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.H4Q 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2992315 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2992315 ']' 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
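All five key/ckey pairs are generated before the target is touched again; the "Waiting for process..." message above is waitforlisten (host/auth.sh@79) confirming that the nvmf_tgt launched back at host/auth.sh@69 is still up and answering on its RPC socket. A minimal re-implementation of that wait, assuming the liveness probe is a cheap RPC such as spdk_get_version (the in-tree helper's exact probe may differ):

    # Hedged sketch of nvmfappstart + waitforlisten: poll the RPC socket until ready.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc_addr=/var/tmp/spdk.sock

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do
        # -t 1 caps the RPC timeout; spdk_get_version is an inexpensive probe
        if "$SPDK/scripts/rpc.py" -t 1 -s "$rpc_addr" spdk_get_version &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done
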
00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZpY 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.778 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Usu ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Usu 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YaS 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.I9R ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.I9R 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Gnl 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.s1z ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s1z 00:31:10.036 16:18:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.I85 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.A8N ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.A8N 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.H4Q 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:31:10.036 16:18:38 
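With the target confirmed up, the host/auth.sh@80-82 loop just traced registers each key file with SPDK's file-based keyring: one named keyN per host secret, plus a matching ckeyN wherever a controller secret was generated (ckey4 was deliberately left empty at host/auth.sh@77, hence the final `[[ -n '' ]]` no-op). The same registration done by hand, using this run's temp files:

    # Register host and controller DHCHAP secrets with SPDK's file-based keyring.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    keys=(/tmp/spdk.key-null.ZpY /tmp/spdk.key-null.YaS /tmp/spdk.key-sha256.Gnl
          /tmp/spdk.key-sha384.I85 /tmp/spdk.key-sha512.H4Q)
    ckeys=(/tmp/spdk.key-sha512.Usu /tmp/spdk.key-sha384.I9R /tmp/spdk.key-sha256.s1z
           /tmp/spdk.key-null.A8N "")

    for i in "${!keys[@]}"; do
        $rpc keyring_file_add_key "key$i" "${keys[i]}"
        # ckey4 is intentionally absent in this run; skip registration when empty
        [[ -n ${ckeys[i]} ]] && $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done
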
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:10.036 16:18:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:31:13.319 Waiting for block devices as requested 00:31:13.320 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:13.320 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:13.320 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:13.320 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:13.577 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:13.577 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:13.577 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:13.835 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:13.835 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:13.835 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:13.835 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:14.094 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:14.094 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:14.094 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:14.352 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:14.352 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:14.352 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:15.285 No valid GPT data, bailing 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 192.168.100.8 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo rdma 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:15.285 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:31:15.543 00:31:15.543 Discovery Log Number of Records 2, Generation counter 2 00:31:15.543 =====Discovery Log Entry 0====== 00:31:15.543 trtype: rdma 00:31:15.543 adrfam: ipv4 00:31:15.543 subtype: current discovery subsystem 00:31:15.543 treq: not specified, sq flow control disable supported 00:31:15.543 portid: 1 00:31:15.543 trsvcid: 4420 00:31:15.543 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:15.543 traddr: 192.168.100.8 00:31:15.543 eflags: none 00:31:15.543 rdma_prtype: not specified 00:31:15.543 rdma_qptype: connected 00:31:15.543 rdma_cms: rdma-cm 00:31:15.543 rdma_pkey: 0x0000 00:31:15.543 =====Discovery Log Entry 1====== 00:31:15.543 trtype: rdma 00:31:15.543 adrfam: ipv4 00:31:15.543 subtype: nvme subsystem 00:31:15.543 treq: not specified, sq flow control disable supported 00:31:15.543 portid: 1 00:31:15.543 trsvcid: 4420 00:31:15.543 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:15.544 traddr: 192.168.100.8 00:31:15.544 eflags: none 00:31:15.544 rdma_prtype: not specified 00:31:15.544 rdma_qptype: connected 00:31:15.544 rdma_cms: rdma-cm 00:31:15.544 rdma_pkey: 0x0000 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
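nvmet_auth_init then claims the local NVMe disk for a kernel target: setup.sh reset rebinds the devices (the vfio-pci -> ioatdma/nvme lines), the GPT probe confirms /dev/nvme0n1 is unused ("No valid GPT data, bailing"), and configure_kernel_target (nvmf/common.sh@682-701) builds the subsystem over configfs and exposes it on the RDMA port; the nvme discover output above confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 answer on 192.168.100.8:4420. xtrace hides redirection targets, so the attribute names in this sketch are the standard nvmet configfs ones, assumed rather than read from the log:

    # Sketch of the traced configfs steps; attribute names assumed from stock nvmet.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1

    modprobe nvmet    # nvmet-rdma must also be available for the rdma trtype
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"

    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"       # restricted again later via allowed_hosts
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    echo 192.168.100.8 > "$port/addr_traddr"
    echo rdma          > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"
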
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
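host/auth.sh@36-38 then restricts the subsystem to a single allowed host, and nvmet_auth_set_key (host/auth.sh@42-51, traced above) points the kernel target at the negotiated hash, DH group, and secrets for that host. Again the redirection targets are invisible in the trace: the per-host dhchap_* attribute names below are the ones kernel nvmet exposes, and the guess that auth.sh@37's `echo 0` lands in attr_allow_any_host is exactly that, a guess. The secrets are this run's key1/ckey1 values:

    # Hedged sketch of nvmet_auth_set_key; attribute names assumed from kernel nvmet.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"   # likely target of auth.sh@37's "echo 0"
    ln -s "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"

    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo "DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==:" > "$host/dhchap_key"
    echo "DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==:" > "$host/dhchap_ctrl_key"
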
host/auth.sh@61 -- # get_main_ns_ip 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.544 16:18:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.802 nvme0n1 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
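connect_authenticate (host/auth.sh@55-61) drives the initiator side: bdev_nvme_set_options advertises which digests and FFDHE groups the SPDK host will accept, and bdev_nvme_attach_controller authenticates against the kernel target using the keyring names registered earlier. The first attempt above offers every digest and group at once and succeeds, surfacing the namespace as nvme0n1. The two RPCs, runnable as-is against this setup:

    # Initiator-side RPC pair as traced at host/auth.sh@60-61.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
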
dhgroup=ffdhe2048 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:15.802 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:15.803 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:15.803 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:15.803 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:15.803 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.803 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.803 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.061 nvme0n1 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.061 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.320 nvme0n1 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.320 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.579 nvme0n1 00:31:16.579 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.579 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.579 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.579 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.579 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.579 16:18:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.579 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.838 nvme0n1 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.838 16:18:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 
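
The iterations traced here all follow the same two-step pattern: host/auth.sh@103 calls nvmet_auth_set_key to program the DH-HMAC-CHAP secrets for the host NQN into the kernel nvmet target, and host/auth.sh@104 calls connect_authenticate to exercise them from the SPDK host side. The echo entries at auth.sh@48-51 are the target-side half; a minimal sketch of what they amount to, assuming the upstream Linux nvmet configfs layout (the directory path and attribute names below come from the kernel nvmet driver, not from this trace, so treat them as assumptions):

    # Target-side half of one iteration (sha256 / ffdhe2048 / keyid=2).
    # Assumes nvmet is driven through configfs as in the upstream kernel.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest, auth.sh@48
    echo ffdhe2048      > "$host_dir/dhchap_dhgroup"  # DH group, auth.sh@49
    # Host and controller secrets (auth.sh@50/@51), copied verbatim from the trace:
    echo 'DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC:' > "$host_dir/dhchap_key"
    echo 'DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX:' > "$host_dir/dhchap_ctrl_key"
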
00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.838 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.096 nvme0n1 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:17.096 
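
The connect_authenticate half is visible verbatim in the trace: bdev_nvme_set_options pins the host to exactly one digest and one DH group, so negotiation can only succeed with the pair under test, and bdev_nvme_attach_controller then connects with the matching key. Replayed by hand it would look roughly like the following (a sketch: rpc_cmd in the trace is presumably a thin wrapper around scripts/rpc.py, and key2/ckey2 name keys registered with the SPDK keyring earlier in the script, outside this excerpt):

    rpc=scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # auth.sh@64 then asserts the controller actually came up ...
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    # ... and auth.sh@65 tears it down before the next keyid is tried.
    $rpc bdev_nvme_detach_controller nvme0
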
16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.096 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.355 nvme0n1 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.355 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=1 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.614 16:18:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.873 nvme0n1 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.873 16:18:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.873 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.131 nvme0n1 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.131 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:18.132 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:18.132 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:18.132 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.132 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.390 nvme0n1 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.390 16:18:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.390 16:18:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.647 nvme0n1 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.647 
16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:18.647 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.648 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.213 nvme0n1 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.213 
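
The common/autotest_common.sh@561 and @589 entries that bracket every rpc_cmd are the harness muting its own tracing: xtrace_disable (which runs the set +x seen at @10) hides the RPC wrapper's internals, and the [[ 0 == 0 ]] that follows is an exit-status assertion whose left-hand side has already been expanded from the captured return code. In outline (a sketch of the pattern, not the harness's verbatim code):

    rpc_cmd() {
        xtrace_disable                     # traces as "set +x" at @10
        "$rootdir/scripts/rpc.py" "$@"     # the actual RPC, untraced
        local rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                     # traces as "[[ 0 == 0 ]]" on success
    }
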
16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.213 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.214 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.472 nvme0n1 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:19.472 
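
The nvmf/common.sh@765-779 run that precedes each attach is get_main_ns_ip resolving the address to connect to. It maps the transport to the name of an environment variable and expands it indirectly, which is why the trace shows ip=NVMF_FIRST_TARGET_IP followed by checks against the literal 192.168.100.8. Reduced to its core (a sketch of the traced logic with the error branches collapsed; TEST_TRANSPORT is "rdma" here, per the test name):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable *name*, not its value
        [[ -z ${!ip} ]] && return 1            # the "[[ -z 192.168.100.8 ]]" check
        echo "${!ip}"                          # -> 192.168.100.8
    }
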
16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.472 16:18:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.730 nvme0n1 00:31:19.730 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.730 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.730 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.730 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.730 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.730 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.989 16:18:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.989 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.247 nvme0n1 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.247 
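
One detail worth noting before the final keyid=4 pass below: the ckey assignment at auth.sh@58 uses bash's ${var:+...} alternate-value expansion, so when the controller key is empty (ckey= for keyid 4, as seen above) the --dhchap-ctrlr-key argument vanishes entirely instead of being passed as an empty string. That is why the attach commands for key4 carry no controller key while every other keyid's do. The mechanism in isolation (a sketch; the array assignment is copied verbatim from the trace):

    ckeys=([2]='DHHC-1:01:...' [4]='')
    for keyid in 2 4; do
        # Expands to the two flag words only when ckeys[keyid] is non-empty;
        # otherwise the array ends up with zero elements.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid argc=${#ckey[@]}"   # -> argc=2, then argc=0
    done
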
16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.247 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.506 nvme0n1 00:31:20.506 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.506 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.506 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.506 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.506 16:18:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.506 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.072 nvme0n1 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
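
The trace interleaves the two halves of every authentication round: on the target side, nvmet_auth_set_key (host/auth.sh@42-51) echoes the digest, DH group, and DHHC-1 secrets into the kernel nvmet configuration for the host NQN; on the initiator side, rpc_cmd drives the SPDK application. A minimal sketch of the target-side helper, reconstructed from the echoed values; the configfs path in NVMET_HOST and the keys/ckeys arrays are assumptions, since only the echoed values appear in this excerpt:

# Sketch of nvmet_auth_set_key as traced at host/auth.sh@42-51.
# Assumption: NVMET_HOST points at /sys/kernel/config/nvmet/hosts/<hostnqn>
# and keys[]/ckeys[] hold the DHHC-1 secrets echoed above.
nvmet_auth_set_key() {
	local digest dhgroup keyid key ckey
	digest="$1" dhgroup="$2" keyid="$3"
	key=${keys[keyid]} ckey=${ckeys[keyid]}

	echo "hmac($digest)" > "$NVMET_HOST/dhchap_hash"
	echo "$dhgroup" > "$NVMET_HOST/dhchap_dhgroup"
	echo "$key" > "$NVMET_HOST/dhchap_key"
	# The controller key is optional: keyid 4 has none, hence the
	# [[ -z '' ]] tests seen elsewhere in the trace.
	[[ -z $ckey ]] || echo "$ckey" > "$NVMET_HOST/dhchap_ctrl_key"
}
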
00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:21.072 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.073 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.639 nvme0n1 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
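
get_main_ns_ip (nvmf/common.sh@765-779) picks the address the host should dial by mapping the transport to the name of an environment variable and then dereferencing it, which is why the trace tests [[ -z rdma ]] and [[ -z NVMF_FIRST_TARGET_IP ]] before echoing 192.168.100.8. A sketch of that logic; the transport variable name (TEST_TRANSPORT here) is an assumption, as the trace only shows its expanded value rdma:

# Sketch of get_main_ns_ip as traced at nvmf/common.sh@765-779.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	[[ -z $TEST_TRANSPORT ]] && return 1            # expands to rdma in this run
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}            # the *name* NVMF_FIRST_TARGET_IP
	[[ -z ${!ip} ]] && return 1                     # indirect expansion: 192.168.100.8
	echo "${!ip}"
}
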
00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:21.639 16:18:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:21.639 16:18:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.639 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.899 nvme0n1 00:31:21.899 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.899 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.899 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.899 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.899 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.899 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:22.157 16:18:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.157 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.415 nvme0n1 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.415 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:22.673 16:18:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:22.674 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:22.674 16:18:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:22.674 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:22.674 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.674 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.932 nvme0n1 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 
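
On the initiator side each round reduces to the same four RPCs, visible as rpc_cmd calls in the trace. A standalone sketch using SPDK's scripts/rpc.py; the key names key0/ckey0 assume the corresponding keyring registrations happened earlier in the test, outside this excerpt:

# One connect_authenticate round (host/auth.sh@55-65), here for
# sha256/ffdhe8192/keyid 0. Assumes rpc.py reaches the host application
# and that key0/ckey0 are already registered.
rpc=scripts/rpc.py

# 1. Restrict the host to a single digest/DH-group combination.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# 2. Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key makes it bidirectional.
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
	-a 192.168.100.8 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Authentication succeeded only if the controller shows up by name...
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. ...then detach so the next digest/dhgroup/keyid round starts clean.
$rpc bdev_nvme_detach_controller nvme0
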
00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.932 16:18:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.866 nvme0n1 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.866 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 nvme0n1 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:24.432 16:18:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 16:18:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.998 nvme0n1 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.998 
16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.998 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.999 16:18:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.564 nvme0n1 00:31:25.564 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.564 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.564 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.564 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.564 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.564 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:25.822 16:18:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.822 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.388 nvme0n1 00:31:26.388 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.388 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.388 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.388 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
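The nvmet_auth_set_key helper traced above (host/auth.sh@42-51) echoes the digest as 'hmac(sha384)', the DH group name, and the DHHC-1 key strings; the redirect targets are not visible in xtrace output. A minimal sketch of what such a target-side step plausibly looks like, assuming the Linux kernel nvmet configfs per-host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) — the HOSTNQN, KEY, and CKEY variables below are placeholders standing in for the values echoed in the trace, not paths confirmed by this log:

# Program per-host DH-HMAC-CHAP parameters on the kernel nvmet target.
# HOSTNQN, KEY, and CKEY are placeholders for the values seen in the trace above.
HOSTNQN=nqn.2024-02.io.spdk:host0
HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
echo 'hmac(sha384)' > "$HOSTDIR/dhchap_hash"     # digest used for the CHAP HMAC
echo 'ffdhe2048'    > "$HOSTDIR/dhchap_dhgroup"  # FFDHE group for the key exchange
echo "$KEY"         > "$HOSTDIR/dhchap_key"      # host key (the DHHC-1:00:... string)
[[ -n "$CKEY" ]] && echo "$CKEY" > "$HOSTDIR/dhchap_ctrl_key"  # controller key, bidirectional auth only

This matches the guard visible at host/auth.sh@51: the controller key is echoed only when ckey is non-empty, so the keyid=4 runs (ckey='') exercise unidirectional authentication.
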
00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.389 16:18:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.647 nvme0n1 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.647 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.648 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.905 nvme0n1 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.905 16:18:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.905 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 nvme0n1 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 16:18:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:27.163 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:27.164 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:27.164 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.164 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.421 nvme0n1 00:31:27.421 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.422 16:18:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:31:27.680 nvme0n1 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:27.680 
16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.680 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.937 nvme0n1 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.937 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.193 nvme0n1 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.193 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:28.451 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:28.452 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:28.452 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:28.452 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.452 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.452 16:18:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.710 nvme0n1 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.710 16:18:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.710 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.969 nvme0n1 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:28.969 16:18:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.969 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.227 nvme0n1 00:31:29.227 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.227 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.227 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.227 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.227 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:29.228 16:18:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.228 16:18:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.486 nvme0n1 00:31:29.486 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.486 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.486 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.486 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.486 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.486 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.743 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.743 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.743 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.743 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.743 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.743 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.744 16:18:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:29.744 16:18:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.744 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.001 nvme0n1 00:31:30.001 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.001 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.002 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.260 nvme0n1 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:31:30.260 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.518 16:18:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.518 16:18:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.776 nvme0n1 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.776 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.777 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.035 nvme0n1 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.035 16:18:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.035 16:18:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.602 nvme0n1 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
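The get_main_ns_ip calls interleaved through this trace (nvmf/common.sh@765-779) resolve which address the initiator should dial for the transport under test. A minimal sketch of that helper, reconstructed only from the xtrace lines above; the TEST_TRANSPORT variable name and the indirect expansion are assumptions, since the trace shows expanded values rather than source:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        # Map each transport to the env var holding its target address.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Bail out if the transport is unset or unmapped (sh@771 in the trace).
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion; here ${!ip} resolves to 192.168.100.8 for rdma.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }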
00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.602 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.169 nvme0n1 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:32.169 16:19:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.169 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.427 nvme0n1 00:31:32.427 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.427 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.427 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.427 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.427 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.427 16:19:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
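Condensed, every iteration above runs the same host-side sequence; the parameters below are the ones from this ffdhe6144/keyid=2 pass. rpc_cmd is the test framework's wrapper around SPDK's rpc.py, and key2/ckey2 are key names assumed to have been registered during earlier setup outside this excerpt:

    # Restrict the initiator to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # Attach over RDMA with DH-HMAC-CHAP; ckey2 makes the auth bidirectional.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # A successful handshake leaves exactly one controller named nvme0.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # Tear down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0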
00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.685 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.943 nvme0n1 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.943 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.201 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.459 nvme0n1 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.459 16:19:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.459 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:33.717 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:33.717 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:33.717 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:33.717 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:33.717 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:33.717 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.717 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.283 nvme0n1 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:34.283 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:34.284 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:34.284 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:34.284 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.284 16:19:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:34.920 nvme0n1 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.920 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.521 nvme0n1 00:31:35.521 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.521 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.521 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.521 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.521 16:19:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:35.521 
16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.521 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.456 nvme0n1 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:36.456 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.457 16:19:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.022 nvme0n1 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.022 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:37.023 16:19:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.023 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.281 nvme0n1 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:37.281 16:19:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:37.281 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.282 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.540 nvme0n1 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.540 16:19:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:37.540 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
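Every iteration traced above has the same target-side half: nvmet_auth_set_key (host/auth.sh@42-51) pushes the digest, DH group and DHHC-1 secrets for the current keyid into the kernel nvmet target before the initiator reconnects. The echoes at auth.sh@48-51 correspond to the nvmet configfs host attributes; below is a minimal sketch of that helper, assuming the usual configfs mount point and a host entry named after the hostnqn (neither path appears in the trace).

# Sketch of nvmet_auth_set_key as traced at host/auth.sh@42-51.
# The configfs paths are assumptions; only the echoed values come from the log.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})"  > "${host}/dhchap_hash"     # auth.sh@48: e.g. 'hmac(sha512)'
    echo "${dhgroup}"       > "${host}/dhchap_dhgroup"  # auth.sh@49: e.g. ffdhe2048
    echo "${keys[keyid]}"   > "${host}/dhchap_key"      # auth.sh@50: DHHC-1 host secret
    if [[ -n "${ckeys[keyid]}" ]]; then
        # auth.sh@51: a controller secret makes the authentication bidirectional.
        echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
    fi
}

Note that keyid=4 carries an empty ckey in the trace, so its [[ -z '' ]] branch skips dhchap_ctrl_key and that iteration exercises unidirectional authentication only.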
00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.541 16:19:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.799 nvme0n1 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:37.799 16:19:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:37.799 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.800 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.058 nvme0n1 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:38.058 16:19:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.058 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.059 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:38.059 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:38.059 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:38.059 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.059 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.317 nvme0n1 00:31:38.317 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.317 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.317 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.317 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.317 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.317 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
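The initiator-side half, connect_authenticate (host/auth.sh@55-65), is the rpc_cmd sequence that repeats for every digest/dhgroup/keyid combination above: pin the host to the one digest and DH group under test, attach over RDMA with the matching key pair, confirm the controller appears, then detach. A condensed sketch built only from the calls visible in the trace (rpc_cmd is the harness wrapper around SPDK's rpc.py):

# Sketch of connect_authenticate as traced at host/auth.sh@55-65.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # auth.sh@58

    # auth.sh@60: restrict the initiator to a single digest/DH group.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # auth.sh@61: attach with the DH-HMAC-CHAP key pair registered as key<N>/ckey<N>.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # auth.sh@64-65: authentication succeeded iff the controller shows up; then tear down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The get_main_ns_ip helper that precedes each attach (nvmf/common.sh@765-779) just maps the transport to an environment variable name, ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP, and echoes its value, which is why every attach above lands on 192.168.100.8.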
00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.318 16:19:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.577 nvme0n1 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.577 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.836 nvme0n1 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.836 16:19:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.836 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.094 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.094 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.094 16:19:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:39.094 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:39.094 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:39.094 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.094 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.094 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.095 nvme0n1 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.095 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 
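For orientation: every iteration traced here follows the same driver loop, whose shape is given away by the host/auth.sh@101-@104 markers. A minimal runnable reconstruction is below; the dhgroups and keyid lists are limited to the values visible in this excerpt, and the two helpers are reduced to echo stubs (the real nvmet_auth_set_key pushes the 'hmac(sha512)' digest, the dhgroup, and the DHHC-1 secrets to the target side, as the @48-@51 echo lines show, while connect_authenticate drives the initiator).

#!/usr/bin/env bash
# Loop shape inferred from the host/auth.sh@101-@104 trace markers; the helper
# bodies are stubbed so the sketch runs standalone.
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # values visible in this excerpt
keys=(key0 key1 key2 key3 key4)            # keyids 0-4, as iterated in the trace

nvmet_auth_set_key()   { echo "target:    digest=$1 dhgroup=$2 keyid=$3"; }
connect_authenticate() { echo "initiator: digest=$1 dhgroup=$2 keyid=$3"; }

for dhgroup in "${dhgroups[@]}"; do                    # host/auth.sh@101
  for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
    nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"    # host/auth.sh@103
    connect_authenticate sha512 "$dhgroup" "$keyid"    # host/auth.sh@104
  done
done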
00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.353 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.353 16:19:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.612 nvme0n1 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.612 16:19:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.871 nvme0n1 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
192.168.100.8 ]] 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.871 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.130 nvme0n1 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.130 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.697 nvme0n1 00:31:40.697 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.697 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.697 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.697 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.697 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.697 16:19:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.697 16:19:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:40.697 16:19:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.697 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.956 nvme0n1 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.956 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.214 nvme0n1 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.214 
16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.214 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.215 16:19:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.782 nvme0n1 00:31:41.782 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.782 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.783 16:19:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.783 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.041 nvme0n1 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.041 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.300 16:19:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.300 16:19:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.559 nvme0n1 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
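Each connect_authenticate call reduces to four initiator-side RPCs, all of which appear verbatim in the trace. A standalone sketch of the iteration that just completed (sha512 / ffdhe6144 / keyid=1), assuming scripts/rpc.py stands in for the suite's rpc_cmd wrapper, a target already serving nqn.2024-02.io.spdk:cnode0 on 192.168.100.8:4420, and the named keys key1/ckey1 already provisioned on the initiator:

#!/usr/bin/env bash
set -e
rpc=scripts/rpc.py   # assumption: substitute for the suite's rpc_cmd wrapper

# Restrict the initiator to the digest/dhgroup pair under test (host/auth.sh@60).
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Attach over RDMA; supplying --dhchap-ctrlr-key makes the authentication
# bidirectional (host/auth.sh@61).
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The controller only shows up if DH-HMAC-CHAP succeeded (host/auth.sh@64).
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Tear down before the next digest/dhgroup/keyid combination (host/auth.sh@65).
$rpc bdev_nvme_detach_controller nvme0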
00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 
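The get_main_ns_ip helper being traced at this point is also fully recoverable from its nvmf/common.sh@765-@779 markers: it maps the transport to a candidate environment variable and echoes that variable's value. A reconstruction under stated assumptions (the trace only shows the false branches of the -z tests, so the failure handling here is guessed; the two variable values are the ones this run used):

#!/usr/bin/env bash
# Reconstructed from the nvmf/common.sh@765-@779 trace lines.
TEST_TRANSPORT=rdma                 # assumption: value this run expanded at @771
NVMF_FIRST_TARGET_IP=192.168.100.8  # assumption: value echoed at @779

get_main_ns_ip() {
  local ip
  local -A ip_candidates=()                               # @766
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP              # @768
  ip_candidates["tcp"]=NVMF_INITIATOR_IP                  # @769
  [[ -z $TEST_TRANSPORT ]] && return 1                    # @771 (guessed branch)
  [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @771 (guessed branch)
  ip=${ip_candidates[$TEST_TRANSPORT]}                    # @772
  [[ -z ${!ip} ]] && return 1                             # @774 (guessed branch)
  echo "${!ip}"                                           # @779
}

get_main_ns_ip   # -> 192.168.100.8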
00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.559 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.126 nvme0n1 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.126 16:19:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.693 nvme0n1 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 
-- # local -A ip_candidates 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.693 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.950 nvme0n1 00:31:43.950 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.950 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.951 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.951 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.951 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.208 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.208 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.208 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.208 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.208 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.208 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.208 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 
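The key= and ckey= strings cycling through these traces follow the NVMe DH-HMAC-CHAP secret representation (the same format nvme-cli's gen-dhchap-key emits): DHHC-1:<t>:<base64 payload>:, where <t> records how the secret was transformed (00 = not transformed, 01/02/03 = HMAC-SHA-256/-384/-512) and the payload is the raw secret with a 4-byte CRC-32 appended. A quick way to pick the fields apart in the shell, using the keyid 0 value from the @45 line above:

  key='DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9:'
  IFS=: read -r fmt t b64 _ <<< "$key"
  echo "$fmt $t"                      # DHHC-1 00 -> untransformed secret
  echo -n "$b64" | base64 -d | wc -c  # 36 bytes = 32-byte secret + 4-byte CRC-32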
00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDE3OTAwY2ZkYzMyNzJkMTYwNDg0ZWQ4NzMxMjI5YmWQTkF9: 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: ]] 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTg3MGU2YWY4ZTdlNjE5YTFmZjM5ZDQ0ZDgyMDEzOTQyMjhiYzAzZjEzNjQ4NWRkYjJlN2U2ZDBlZWIxMGFkMJqOrwU=: 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.209 16:19:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.776 nvme0n1 
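The nvmf/common.sh@765-@779 run that precedes every attach is get_main_ns_ip resolving which address to hand to -a. Reconstructed from the xtrace (a sketch; the real helper may differ in detail), it maps the transport to the *name* of an environment variable and then dereferences it:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # the table holds variable names; ${!ip} dereferences the chosen one
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }

On this rig TEST_TRANSPORT=rdma, so it prints $NVMF_FIRST_TARGET_IP — 192.168.100.8, the address every attach in this log targets.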
00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.776 16:19:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.776 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.342 nvme0n1 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.342 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 
00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.600 16:19:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.167 nvme0n1 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTU1ZjE3NGRjN2I4YjcxZmViNmU1Njc4YTI5NGQ1YjBhNDM0NzFmNDRjZTc1MDFhGJ/CFA==: 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY3MGM4MjU5ZmRlMTM5MmMwZmY2YTJhNDZmYWI5NzkjPpzK: 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:46.167 16:19:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.167 16:19:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.737 nvme0n1 00:31:46.737 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.737 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.737 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.737 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.737 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.737 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.737 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
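The keyid 4 cycle that follows exercises the one branch the earlier keyids never hit: no controller key was generated for it, so ckey= stays empty and the attach is issued with --dhchap-key key4 alone (host-only, i.e. unidirectional, authentication). That is the host/auth.sh@58 expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) at work; a short demo of the :+ idiom with illustrative values:

  ckeys=([1]=secret1 [4]='')
  keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey1
  keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 0 -> the option pair is omitted entirely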
00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmNkZmYyNDIxY2EwMzgyZjE3NjQ1ODc5MzdhNTA5ZDViYTkxMjdmY2RmYTcwMzlmOTQ4MTJkNWQzYzZmMjM0MNUYmoc=: 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.738 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.673 nvme0n1 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:47.673 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:47.674 
16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.674 16:19:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.674 request: 00:31:47.674 { 00:31:47.674 "name": "nvme0", 00:31:47.674 "trtype": "rdma", 00:31:47.674 "traddr": "192.168.100.8", 00:31:47.674 "adrfam": "ipv4", 00:31:47.674 "trsvcid": "4420", 00:31:47.674 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:31:47.674 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:47.674 "prchk_reftag": false, 00:31:47.674 "prchk_guard": false, 00:31:47.674 "hdgst": false, 00:31:47.674 "ddgst": false, 00:31:47.674 "allow_unrecognized_csi": false, 00:31:47.674 "method": "bdev_nvme_attach_controller", 00:31:47.674 "req_id": 1 00:31:47.674 } 00:31:47.674 Got JSON-RPC error response 00:31:47.674 response: 00:31:47.674 { 00:31:47.674 "code": -5, 00:31:47.674 "message": "Input/output error" 00:31:47.674 } 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.674 request: 00:31:47.674 { 00:31:47.674 "name": "nvme0", 00:31:47.674 "trtype": "rdma", 00:31:47.674 "traddr": "192.168.100.8", 00:31:47.674 "adrfam": "ipv4", 00:31:47.674 "trsvcid": "4420", 00:31:47.674 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:47.674 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:47.674 "prchk_reftag": false, 00:31:47.674 "prchk_guard": false, 00:31:47.674 "hdgst": false, 00:31:47.674 "ddgst": false, 00:31:47.674 "dhchap_key": "key2", 00:31:47.674 "allow_unrecognized_csi": false, 00:31:47.674 "method": "bdev_nvme_attach_controller", 00:31:47.674 "req_id": 1 00:31:47.674 } 00:31:47.674 Got JSON-RPC error response 00:31:47.674 response: 00:31:47.674 { 00:31:47.674 "code": -5, 00:31:47.674 "message": "Input/output error" 00:31:47.674 } 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.674 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma 
]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.933 request: 00:31:47.933 { 00:31:47.933 "name": "nvme0", 00:31:47.933 "trtype": "rdma", 00:31:47.933 "traddr": "192.168.100.8", 00:31:47.933 "adrfam": "ipv4", 00:31:47.933 "trsvcid": "4420", 00:31:47.933 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:47.933 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:47.933 "prchk_reftag": false, 00:31:47.933 "prchk_guard": false, 00:31:47.933 "hdgst": false, 00:31:47.933 "ddgst": false, 00:31:47.933 "dhchap_key": "key1", 00:31:47.933 "dhchap_ctrlr_key": "ckey2", 00:31:47.933 "allow_unrecognized_csi": false, 00:31:47.933 "method": "bdev_nvme_attach_controller", 00:31:47.933 "req_id": 1 00:31:47.933 } 00:31:47.933 Got JSON-RPC error response 00:31:47.933 response: 00:31:47.933 { 00:31:47.933 "code": -5, 00:31:47.933 "message": "Input/output error" 00:31:47.933 } 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:31:47.933 16:19:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.933 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.192 nvme0n1 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.192 
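Having rotated the live controller to key2/ckey2 with bdev_nvme_set_keys above, the script moves to negative testing: each NOT-wrapped call must fail, and the trace shows the expected errors — -5 (Input/output error) for the earlier attaches with missing or mismatched keys, and -13 (Permission denied) for the disallowed re-key to key1/ckey2 below. NOT is the autotest_common.sh inversion helper traced at @650-@677; a simplified sketch of its core (the full helper also special-cases signal exits above 128):

  NOT() {
      local es=0
      "$@" || es=$?  # run the wrapped command, capture its exit status
      ((es != 0))    # succeed only if the command failed
  }
  NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2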
16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.192 request: 00:31:48.192 { 00:31:48.192 "name": "nvme0", 00:31:48.192 "dhchap_key": "key1", 00:31:48.192 "dhchap_ctrlr_key": "ckey2", 00:31:48.192 "method": "bdev_nvme_set_keys", 00:31:48.192 "req_id": 1 00:31:48.192 } 00:31:48.192 Got JSON-RPC error response 00:31:48.192 response: 00:31:48.192 { 00:31:48.192 "code": -13, 00:31:48.192 "message": "Permission denied" 00:31:48.192 } 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.192 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.450 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host 
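[Editor's note] bdev_nvme_set_keys with the crossed pair key1/ckey2 fails with -13 (Permission denied) as required, and the rejected re-authentication takes the session down; because the controller was attached with --ctrlr-loss-timeout-sec 1, the harness can simply poll until the controller list drains, which is what the trace does next. A sketch of that wait loop, assuming scripts/rpc.py and jq are available:

  # wait for the controller torn down by the rejected re-key to disappear
  while [ "$(scripts/rpc.py bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
    sleep 1
  done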
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:48.450 16:19:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:49.384 16:19:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.384 16:19:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:49.384 16:19:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.384 16:19:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.384 16:19:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.384 16:19:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:49.384 16:19:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFkZjY4M2UyMDYyZTI0MjU3MTczZDRiNzU3MWFmNmM1OTgxZmQ0NDMxYWVmM2ZkkYhUxw==: 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: ]] 00:31:50.319 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjEwYzA0MjM5ZDFmZjUxZDkwODJkNGE2Zjc3N2EzZDdkZTlhYTE1MjUxMmZmOTBkdRXCEA==: 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.577 16:19:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.577 nvme0n1 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.577 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E2NzQ5MDRmMWQ4OTVhNjIwNGYyMmUxOGY5ZjdkYTZgRhLC: 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: ]] 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY2NDhlNjRkMjFmMDIxMWU4ZjkwODRiOTg2ODE2YjcMYnfX: 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.578 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.836 request: 00:31:50.836 { 00:31:50.836 "name": "nvme0", 00:31:50.836 "dhchap_key": "key2", 00:31:50.836 "dhchap_ctrlr_key": "ckey1", 00:31:50.836 "method": "bdev_nvme_set_keys", 00:31:50.836 "req_id": 1 00:31:50.836 } 00:31:50.836 Got JSON-RPC error response 00:31:50.836 response: 00:31:50.836 { 00:31:50.836 "code": -13, 00:31:50.836 "message": "Permission denied" 00:31:50.836 } 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:50.836 16:19:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:51.771 16:19:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.771 16:19:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:51.771 16:19:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.771 16:19:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.771 16:19:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.771 16:19:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:51.771 16:19:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:52.706 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.706 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:52.706 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.706 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.706 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:31:52.964 
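[Editor's note] Both crossed pairings (key1/ckey2 at attach time, key2/ckey1 at re-key time) were refused with -13, closing out the negative half of the auth matrix. For contrast, the matched rotation that this trace accepted earlier is a single call (a sketch; key2/ckey2 are keyring names the harness loaded beforehand):

  # matched pair: rotate host and controller keys together
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2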
16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:52.964 rmmod nvme_rdma 00:31:52.964 rmmod nvme_fabrics 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 2992315 ']' 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 2992315 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2992315 ']' 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2992315 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:52.964 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2992315 00:31:52.965 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:52.965 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:52.965 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2992315' 00:31:52.965 killing process with pid 2992315 00:31:52.965 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2992315 00:31:52.965 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2992315 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:53.223 
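[Editor's note] Cleanup unloads nvme-rdma/nvme-fabrics, kills the SPDK target (pid 2992315), and dismantles the kernel nvmet peer. The configfs teardown traced across this and the following chunk condenses to the sketch below, with paths exactly as used by this run; the `echo 0` namespace-disable step is traced without its redirect target visible, so it is left out here:

  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_rdma nvmet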
16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:31:53.223 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_rdma nvmet 00:31:53.224 16:19:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:56.511 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:56.511 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:56.768 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:56.768 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:56.768 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:58.668 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:58.668 16:19:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZpY /tmp/spdk.key-null.YaS /tmp/spdk.key-sha256.Gnl /tmp/spdk.key-sha384.I85 /tmp/spdk.key-sha512.H4Q /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:31:58.668 16:19:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:32:01.951 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:80:04.1 (8086 2021): Already 
using the vfio-pci driver 00:32:01.951 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:01.951 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:01.951 00:32:01.951 real 0m59.840s 00:32:01.951 user 0m53.090s 00:32:01.951 sys 0m15.267s 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.951 ************************************ 00:32:01.951 END TEST nvmf_auth_host 00:32:01.951 ************************************ 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:01.951 16:19:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.952 ************************************ 00:32:01.952 START TEST nvmf_bdevperf 00:32:01.952 ************************************ 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:32:01.952 * Looking for test storage... 
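[Editor's note] nvmf_auth_host passes in roughly a minute of wall time, the NVMe device is handed back to vfio-pci, and the harness moves on to nvmf_bdevperf. To reproduce that suite outside Jenkins, the invocation is the one shown in the START banner (repo root as checked out by this job):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  test/nvmf/host/bdevperf.sh --transport=rdma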
00:32:01.952 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:01.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.952 --rc genhtml_branch_coverage=1 00:32:01.952 --rc genhtml_function_coverage=1 00:32:01.952 --rc genhtml_legend=1 00:32:01.952 --rc geninfo_all_blocks=1 00:32:01.952 --rc geninfo_unexecuted_blocks=1 00:32:01.952 00:32:01.952 ' 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:01.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.952 --rc genhtml_branch_coverage=1 00:32:01.952 --rc genhtml_function_coverage=1 00:32:01.952 --rc genhtml_legend=1 00:32:01.952 --rc geninfo_all_blocks=1 00:32:01.952 --rc geninfo_unexecuted_blocks=1 00:32:01.952 00:32:01.952 ' 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:01.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.952 --rc genhtml_branch_coverage=1 00:32:01.952 --rc genhtml_function_coverage=1 00:32:01.952 --rc genhtml_legend=1 00:32:01.952 --rc geninfo_all_blocks=1 00:32:01.952 --rc geninfo_unexecuted_blocks=1 00:32:01.952 00:32:01.952 ' 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:01.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.952 --rc genhtml_branch_coverage=1 00:32:01.952 --rc genhtml_function_coverage=1 00:32:01.952 --rc genhtml_legend=1 00:32:01.952 --rc geninfo_all_blocks=1 00:32:01.952 --rc geninfo_unexecuted_blocks=1 00:32:01.952 00:32:01.952 ' 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.952 16:19:30 
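[Editor's note] The scripts/common.sh block above is a field-by-field version comparison, used here to detect that the installed lcov (1.15) predates 2.x and therefore, evidently, to pick the pre-2.0 --rc option names for LCOV_OPTS. Where GNU coreutils is available, the same "is 1.15 < 2" test can be sketched in one line (an alternative, not what the script itself does):

  # exits 0 iff the two versions are already in ascending order, i.e. 1.15 <= 2
  printf '%s\n' 1.15 2 | sort -V -C && echo "lcov predates 2.x: use legacy LCOV_OPTS"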
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.952 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:01.953 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:01.953 16:19:30 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.953 16:19:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.517 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.518 16:19:36 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:08.518 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:08.518 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 
00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:08.518 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:08.518 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # rdma_device_init 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@526 -- # allocate_nic_ips 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:08.518 
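[Editor's note] With both mlx5 ports (mlx_0_0, mlx_0_1) discovered under 0000:d9:00.0/1, rdma_device_init loads the IB/RDMA kernel stack before IPs are assigned. The module sequence traced above, as a standalone sketch:

  # load the RDMA core stack in the order the harness uses
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
  done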
16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:08.518 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:08.518 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:08.518 altname enp217s0f0np0 00:32:08.518 altname ens818f0np0 00:32:08.518 inet 192.168.100.8/24 scope global mlx_0_0 00:32:08.518 valid_lft forever preferred_lft forever 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:08.518 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:08.519 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:08.519 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # 
ip -o -4 addr show mlx_0_1 00:32:08.519 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:08.519 16:19:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:08.519 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:08.519 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:08.519 altname enp217s0f1np1 00:32:08.519 altname ens818f1np1 00:32:08.519 inet 192.168.100.9/24 scope global mlx_0_1 00:32:08.519 valid_lft forever preferred_lft forever 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:32:08.519 192.168.100.9' 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:32:08.519 192.168.100.9' 00:32:08.519 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # head -n 1 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:32:08.777 192.168.100.9' 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # tail -n +2 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # head -n 1 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:32:08.777 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3007303 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3007303 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3007303 
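[Editor's note] Each interface IP is scraped with the ip/awk/cut pipeline traced above, and the resulting two-address list is split into first/second target IPs with head/tail. A sketch of both steps (get_ip is a local helper for this sketch, not a harness function):

  # one record per line thanks to "ip -o"; field 4 is the CIDR address
  get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip mlx_0_0)" "$(get_ip mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9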
']' 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:08.778 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.778 [2024-12-15 16:19:37.184827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:08.778 [2024-12-15 16:19:37.184884] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.778 [2024-12-15 16:19:37.257878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:08.778 [2024-12-15 16:19:37.297488] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.778 [2024-12-15 16:19:37.297530] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.778 [2024-12-15 16:19:37.297539] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.778 [2024-12-15 16:19:37.297547] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.778 [2024-12-15 16:19:37.297554] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.778 [2024-12-15 16:19:37.297661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:08.778 [2024-12-15 16:19:37.297746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:08.778 [2024-12-15 16:19:37.297748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:09.036 [2024-12-15 16:19:37.482929] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aa25c0/0x1aa6ab0) succeed. 
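[Editor's note] nvmf_tgt (pid 3007303) is up on cores 1-3 (-m 0xE) and both mlx5 IB devices register. The target-side RPCs issued next are gathered into one sketch below; every command and argument appears verbatim in the trace, only the scripts/rpc.py path is assumed:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420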
00:32:09.036 [2024-12-15 16:19:37.493110] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa3b60/0x1ae8150) succeed. 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:09.036 Malloc0 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.036 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:09.294 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:09.295 [2024-12-15 16:19:37.626014] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:09.295 { 00:32:09.295 "params": { 00:32:09.295 "name": "Nvme$subsystem", 00:32:09.295 "trtype": "$TEST_TRANSPORT", 00:32:09.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.295 "adrfam": "ipv4", 00:32:09.295 "trsvcid": "$NVMF_PORT", 00:32:09.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.295 "hdgst": ${hdgst:-false}, 00:32:09.295 "ddgst": ${ddgst:-false} 00:32:09.295 }, 00:32:09.295 "method": "bdev_nvme_attach_controller" 00:32:09.295 } 00:32:09.295 
EOF 00:32:09.295 )") 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:32:09.295 16:19:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:32:09.295 "params": { 00:32:09.295 "name": "Nvme1", 00:32:09.295 "trtype": "rdma", 00:32:09.295 "traddr": "192.168.100.8", 00:32:09.295 "adrfam": "ipv4", 00:32:09.295 "trsvcid": "4420", 00:32:09.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:09.295 "hdgst": false, 00:32:09.295 "ddgst": false 00:32:09.295 }, 00:32:09.295 "method": "bdev_nvme_attach_controller" 00:32:09.295 }' 00:32:09.295 [2024-12-15 16:19:37.675955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:09.295 [2024-12-15 16:19:37.676006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007336 ] 00:32:09.295 [2024-12-15 16:19:37.747753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.295 [2024-12-15 16:19:37.785982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.553 Running I/O for 1 seconds... 00:32:10.488 18306.00 IOPS, 71.51 MiB/s 00:32:10.488 Latency(us) 00:32:10.488 [2024-12-15T15:19:39.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.488 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:10.488 Verification LBA range: start 0x0 length 0x4000 00:32:10.488 Nvme1n1 : 1.01 18354.39 71.70 0.00 0.00 6936.20 2634.55 11114.91 00:32:10.488 [2024-12-15T15:19:39.058Z] =================================================================================================================== 00:32:10.488 [2024-12-15T15:19:39.058Z] Total : 18354.39 71.70 0.00 0.00 6936.20 2634.55 11114.91 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3007601 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:10.747 { 00:32:10.747 "params": { 00:32:10.747 "name": "Nvme$subsystem", 00:32:10.747 "trtype": "$TEST_TRANSPORT", 00:32:10.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.747 "adrfam": "ipv4", 00:32:10.747 "trsvcid": "$NVMF_PORT", 00:32:10.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.747 "hdgst": ${hdgst:-false}, 00:32:10.747 "ddgst": ${ddgst:-false} 00:32:10.747 }, 00:32:10.747 "method": 
"bdev_nvme_attach_controller" 00:32:10.747 } 00:32:10.747 EOF 00:32:10.747 )") 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:32:10.747 16:19:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:32:10.747 "params": { 00:32:10.747 "name": "Nvme1", 00:32:10.747 "trtype": "rdma", 00:32:10.747 "traddr": "192.168.100.8", 00:32:10.747 "adrfam": "ipv4", 00:32:10.747 "trsvcid": "4420", 00:32:10.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.747 "hdgst": false, 00:32:10.747 "ddgst": false 00:32:10.747 }, 00:32:10.747 "method": "bdev_nvme_attach_controller" 00:32:10.747 }' 00:32:10.747 [2024-12-15 16:19:39.213504] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:10.747 [2024-12-15 16:19:39.213558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007601 ] 00:32:10.747 [2024-12-15 16:19:39.283757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.009 [2024-12-15 16:19:39.320140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.009 Running I/O for 15 seconds... 00:32:12.957 18304.00 IOPS, 71.50 MiB/s [2024-12-15T15:19:42.462Z] 18399.00 IOPS, 71.87 MiB/s [2024-12-15T15:19:42.462Z] 16:19:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3007303 00:32:13.892 16:19:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:14.720 16427.00 IOPS, 64.17 MiB/s [2024-12-15T15:19:43.290Z] [2024-12-15 16:19:43.209951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182700 00:32:14.720 [2024-12-15 16:19:43.209986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d4a6000 sqhd:7250 p:0 m:0 dnr:0 00:32:14.720 [2024-12-15 16:19:43.210004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x182700 00:32:14.720 [2024-12-15 16:19:43.210014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d4a6000 sqhd:7250 p:0 m:0 dnr:0 00:32:14.720 [2024-12-15 16:19:43.210025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x182700 00:32:14.720 [2024-12-15 16:19:43.210035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d4a6000 sqhd:7250 p:0 m:0 dnr:0 00:32:14.720 [2024-12-15 16:19:43.210045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x182700 00:32:14.720 [2024-12-15 16:19:43.210054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d4a6000 sqhd:7250 p:0 m:0 dnr:0 00:32:14.720 [2024-12-15 16:19:43.210064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:1064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182700 00:32:14.720 [2024-12-15 16:19:43.210078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d4a6000 sqhd:7250 p:0 m:0 dnr:0
[... ~120 further command/completion pairs elided: the same paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices repeat for lba 1072 through 2040 (cid varies), one 0x1000 buffer apiece from 0x2000075f2000 down to 0x200007500000, every read aborted with ABORTED - SQ DELETION (00/08) ...]
00:32:14.724 [2024-12-15 16:19:43.224170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.724 [2024-12-15 16:19:43.224193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.724 [2024-12-15 16:19:43.224206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:8 PRP1 0x0 PRP2 0x0 00:32:14.724 [2024-12-15 16:19:43.224220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.724 [2024-12-15 16:19:43.224270] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4900 was disconnected and freed. reset controller.
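The wall of ABORTED - SQ DELETION completions condensed above is the intended fallout of the fault injection at host/bdevperf.sh line 33: while a 15-second verify run is in flight, the script SIGKILLs the target, every queued read on the qpair is aborted, and the host enters its reset/reconnect path. Reduced to its shape per the traces in this log (variable names hypothetical; the harness additionally sets cleanup traps not shown here):

    # fault-injection pattern of bdevperf.sh lines 29-36, as traced above
    bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"      # hard-kill the nvmf target mid-run (pid 3007303 here)
    sleep 3
    tgt_init                # bring up a fresh target for bdevperf to reconnect to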
00:32:14.724 [2024-12-15 16:19:43.224309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.724 [2024-12-15 16:19:43.224328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:2357740 sqhd:29a0 p:0 m:0 dnr:0 00:32:14.724 [2024-12-15 16:19:43.224342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.724 [2024-12-15 16:19:43.224355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:2357740 sqhd:29a0 p:0 m:0 dnr:0 00:32:14.724 [2024-12-15 16:19:43.224369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.724 [2024-12-15 16:19:43.224381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:2357740 sqhd:29a0 p:0 m:0 dnr:0 00:32:14.724 [2024-12-15 16:19:43.224394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.724 [2024-12-15 16:19:43.224407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:2357740 sqhd:29a0 p:0 m:0 dnr:0 00:32:14.724 [2024-12-15 16:19:43.242758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:14.724 [2024-12-15 16:19:43.242824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.724 [2024-12-15 16:19:43.242858] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:14.724 [2024-12-15 16:19:43.245863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.724 [2024-12-15 16:19:43.248553] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:14.724 [2024-12-15 16:19:43.248575] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:14.724 [2024-12-15 16:19:43.248584] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:32:15.916 12320.25 IOPS, 48.13 MiB/s [2024-12-15T15:19:44.486Z] [2024-12-15 16:19:44.252647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:15.916 [2024-12-15 16:19:44.252723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:15.916 [2024-12-15 16:19:44.253311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:15.916 [2024-12-15 16:19:44.253346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:15.916 [2024-12-15 16:19:44.253378] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:15.916 [2024-12-15 16:19:44.255545] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
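Note: the same teardown reaches the admin queue above: the outstanding ASYNC EVENT REQUESTs are aborted, CQ polling returns transport error -6 (ENXIO, "No such device or address"), and each reconnect attempt is then refused at the RDMA CM layer with RDMA_CM_EVENT_REJECTED because nothing is listening on 192.168.100.8:4420 while the target is down. A quick hedged check from the host side, assuming kernel nvme-cli is installed (it is not part of this harness): if discovery also fails, the rejection simply means the listener is not back yet.

  # Assumption: nvme-cli present on the host. A rejected RDMA connect like
  # the one above usually just means the target's listener is not up yet.
  nvme discover -t rdma -a 192.168.100.8 -s 4420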
00:32:15.916 [2024-12-15 16:19:44.256157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:15.916 [2024-12-15 16:19:44.268230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:15.916 [2024-12-15 16:19:44.270967] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:15.916 [2024-12-15 16:19:44.270987] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:15.916 [2024-12-15 16:19:44.270995] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:32:16.740 9856.20 IOPS, 38.50 MiB/s [2024-12-15T15:19:45.310Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3007303 Killed "${NVMF_APP[@]}" "$@" 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3008668 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3008668 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3008668 ']' 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:16.740 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:16.740 [2024-12-15 16:19:45.238716] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:16.740 [2024-12-15 16:19:45.238770] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.740 [2024-12-15 16:19:45.275018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:16.740 [2024-12-15 16:19:45.275048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
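Note: at this point bdevperf.sh has killed the old target (the Killed "${NVMF_APP[@]}" line, pid 3007303) and tgt_init starts a replacement nvmf_tgt (pid 3008668); waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock. A rough bash equivalent of that wait, sketched with rpc_get_methods (a standard SPDK RPC); the polling interval is an arbitrary choice:

  # Poll the default RPC socket until the fresh nvmf_tgt responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done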
00:32:16.740 [2024-12-15 16:19:45.275222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.740 [2024-12-15 16:19:45.275235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.740 [2024-12-15 16:19:45.275246] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:16.740 [2024-12-15 16:19:45.277917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.740 [2024-12-15 16:19:45.280917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.740 [2024-12-15 16:19:45.283602] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:16.740 [2024-12-15 16:19:45.283624] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:16.740 [2024-12-15 16:19:45.283632] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:32:16.997 [2024-12-15 16:19:45.310909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:16.997 [2024-12-15 16:19:45.349773] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.997 [2024-12-15 16:19:45.349814] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.997 [2024-12-15 16:19:45.349823] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.997 [2024-12-15 16:19:45.349832] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.997 [2024-12-15 16:19:45.349839] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.997 [2024-12-15 16:19:45.349882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:16.997 [2024-12-15 16:19:45.349974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:16.997 [2024-12-15 16:19:45.349975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.997 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:16.997 8213.50 IOPS, 32.08 MiB/s [2024-12-15T15:19:45.567Z] [2024-12-15 16:19:45.525468] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14ea5c0/0x14eeab0) succeed. 
00:32:16.997 [2024-12-15 16:19:45.535874] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14ebb60/0x1530150) succeed. 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.254 Malloc0 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.254 [2024-12-15 16:19:45.666659] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.254 16:19:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3007601 00:32:17.818 [2024-12-15 16:19:46.287517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:17.818 [2024-12-15 16:19:46.287546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.818 [2024-12-15 16:19:46.287725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.818 [2024-12-15 16:19:46.287737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.818 [2024-12-15 16:19:46.287749] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:17.818 [2024-12-15 16:19:46.288550] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:17.818 [2024-12-15 16:19:46.290441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
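Note: the tgt_init sequence above is plain RPC driving; condensed, the same RDMA target can be rebuilt by hand with the commands the trace shows verbatim (paths are relative to an SPDK build tree; only the backgrounding and the bare listing here are editorial):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the "Target Listening" notice fires, the host's next reset attempt goes through, which is the "Resetting controller successful" transition that follows.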
00:32:17.818 [2024-12-15 16:19:46.301490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:17.818 [2024-12-15 16:19:46.341364] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:19.011 7490.71 IOPS, 29.26 MiB/s
[2024-12-15T15:19:48.954Z] 8871.62 IOPS, 34.65 MiB/s
[2024-12-15T15:19:49.889Z] 9944.22 IOPS, 38.84 MiB/s
[2024-12-15T15:19:50.824Z] 10803.10 IOPS, 42.20 MiB/s
[2024-12-15T15:19:51.760Z] 11504.27 IOPS, 44.94 MiB/s
[2024-12-15T15:19:52.695Z] 12088.08 IOPS, 47.22 MiB/s
[2024-12-15T15:19:53.630Z] 12585.08 IOPS, 49.16 MiB/s
[2024-12-15T15:19:54.566Z] 13009.21 IOPS, 50.82 MiB/s
[2024-12-15T15:19:54.566Z] 13377.33 IOPS, 52.26 MiB/s
00:32:25.996 Latency(us)
00:32:25.996 [2024-12-15T15:19:54.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:25.996 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:25.996 Verification LBA range: start 0x0 length 0x4000
00:32:25.996 Nvme1n1 : 15.00 13379.99 52.27 10576.32 0.00 5324.57 330.96 1073741.82
00:32:25.996 [2024-12-15T15:19:54.566Z] ===================================================================================================================
00:32:25.996 [2024-12-15T15:19:54.566Z] Total : 13379.99 52.27 10576.32 0.00 5324.57 330.96 1073741.82
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:32:26.254 rmmod nvme_rdma
00:32:26.254 rmmod nvme_fabrics
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 3008668 ']'
00:32:26.254 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 3008668
00:32:26.255 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3008668 ']'
00:32:26.255 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3008668
00:32:26.255 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:32:26.255 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:26.255 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3008668
00:32:26.513 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:26.513 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:26.513 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3008668'
00:32:26.513 killing process with pid 3008668
00:32:26.513 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3008668
00:32:26.513 16:19:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3008668
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:32:26.771
00:32:26.771 real 0m24.969s
00:32:26.771 user 1m2.287s
00:32:26.771 sys 0m6.474s
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:26.771 ************************************
00:32:26.771 END TEST nvmf_bdevperf
00:32:26.771 ************************************
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:26.771 ************************************
00:32:26.771 START TEST nvmf_target_disconnect
00:32:26.771 ************************************
00:32:26.771 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:32:26.771 * Looking for test storage...
00:32:26.771 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:26.771 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:26.771 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:26.771 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:27.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.029 --rc genhtml_branch_coverage=1 00:32:27.029 --rc genhtml_function_coverage=1 00:32:27.029 --rc genhtml_legend=1 00:32:27.029 --rc geninfo_all_blocks=1 00:32:27.029 --rc geninfo_unexecuted_blocks=1 00:32:27.029 00:32:27.029 ' 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:27.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.029 --rc genhtml_branch_coverage=1 00:32:27.029 --rc genhtml_function_coverage=1 00:32:27.029 --rc genhtml_legend=1 00:32:27.029 --rc geninfo_all_blocks=1 00:32:27.029 --rc geninfo_unexecuted_blocks=1 00:32:27.029 00:32:27.029 ' 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:27.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.029 --rc genhtml_branch_coverage=1 00:32:27.029 --rc genhtml_function_coverage=1 00:32:27.029 --rc genhtml_legend=1 00:32:27.029 --rc geninfo_all_blocks=1 00:32:27.029 --rc geninfo_unexecuted_blocks=1 00:32:27.029 00:32:27.029 ' 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:27.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.029 --rc genhtml_branch_coverage=1 00:32:27.029 --rc genhtml_function_coverage=1 00:32:27.029 --rc genhtml_legend=1 00:32:27.029 --rc geninfo_all_blocks=1 00:32:27.029 --rc geninfo_unexecuted_blocks=1 00:32:27.029 00:32:27.029 ' 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.029 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.030 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.030 16:19:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:33.585 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:33.585 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:32:33.585 16:20:01 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:33.585 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:33.585 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # rdma_device_init 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:32:33.585 16:20:01 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@526 -- # allocate_nic_ips 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:33.585 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:33.586 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:33.586 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:33.586 altname enp217s0f0np0 00:32:33.586 altname ens818f0np0 00:32:33.586 inet 192.168.100.8/24 scope global mlx_0_0 00:32:33.586 valid_lft forever preferred_lft forever 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:33.586 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:33.586 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:33.586 altname enp217s0f1np1 00:32:33.586 altname ens818f1np1 00:32:33.586 inet 192.168.100.9/24 scope global mlx_0_1 00:32:33.586 valid_lft forever preferred_lft forever 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:32:33.586 192.168.100.9' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:32:33.586 192.168.100.9' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # head -n 1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:32:33.586 
192.168.100.9' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # tail -n +2 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # head -n 1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:33.586 16:20:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:33.586 ************************************ 00:32:33.586 START TEST nvmf_target_disconnect_tc1 00:32:33.586 ************************************ 00:32:33.586 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:32:33.586 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:33.586 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:32:33.586 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:33.586 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:33.586 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:33.586 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:33.586 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:33.587 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:33.587 16:20:02 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:33.587 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:33.587 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:32:33.587 16:20:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:33.587 [2024-12-15 16:20:02.140193] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:33.587 [2024-12-15 16:20:02.140309] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:33.587 [2024-12-15 16:20:02.140339] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7000 00:32:34.962 [2024-12-15 16:20:03.144282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:34.962 [2024-12-15 16:20:03.144354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:34.962 [2024-12-15 16:20:03.144390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:32:34.962 [2024-12-15 16:20:03.144462] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:34.962 [2024-12-15 16:20:03.144497] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:34.962 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:32:34.962 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:34.962 Initializing NVMe Controllers 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:34.962 00:32:34.962 real 0m1.133s 00:32:34.962 user 0m0.884s 00:32:34.962 sys 0m0.237s 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:34.962 ************************************ 00:32:34.962 END TEST nvmf_target_disconnect_tc1 00:32:34.962 ************************************ 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:34.962 16:20:03 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:34.962 ************************************ 00:32:34.962 START TEST nvmf_target_disconnect_tc2 00:32:34.962 ************************************ 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:32:34.962 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3013735 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3013735 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3013735 ']' 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:34.963 [2024-12-15 16:20:03.286344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:34.963 [2024-12-15 16:20:03.286390] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.963 [2024-12-15 16:20:03.356268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:34.963 [2024-12-15 16:20:03.396490] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:34.963 [2024-12-15 16:20:03.396527] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.963 [2024-12-15 16:20:03.396537] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.963 [2024-12-15 16:20:03.396546] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.963 [2024-12-15 16:20:03.396553] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.963 [2024-12-15 16:20:03.396787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:32:34.963 [2024-12-15 16:20:03.396668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:32:34.963 [2024-12-15 16:20:03.396691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:32:34.963 [2024-12-15 16:20:03.396788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:34.963 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 Malloc0 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 [2024-12-15 16:20:03.582871] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cb6dd0/0x1cc34f0) succeed. 00:32:35.222 [2024-12-15 16:20:03.593798] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cb8410/0x1d04b90) succeed. 
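[annotation] The rpc_cmd calls in this stretch drive the freshly started nvmf_tgt over its UNIX-domain RPC socket (/var/tmp/spdk.sock). Outside the harness, the same target state can be reproduced with SPDK's scripts/rpc.py; a minimal sketch of the full sequence this test issues (the subsystem and listener calls follow just below), assuming the default socket and the checkout path used by this job:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420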
00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 [2024-12-15 16:20:03.732503] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3013766 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:35.222 16:20:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:37.751 16:20:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3013735 00:32:37.751 16:20:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Write completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 Read completed with error (sct=0, sc=8) 00:32:38.687 starting I/O failed 00:32:38.687 [2024-12-15 16:20:06.926485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.253 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3013735 Killed "${NVMF_APP[@]}" "$@" 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:39.253 16:20:07 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3014555 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3014555 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3014555 ']' 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.253 16:20:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:39.253 [2024-12-15 16:20:07.808224] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:39.253 [2024-12-15 16:20:07.808276] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.512 [2024-12-15 16:20:07.897743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Write completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 Read completed with error (sct=0, sc=8) 00:32:39.512 starting I/O failed 00:32:39.512 [2024-12-15 16:20:07.931546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.512 [2024-12-15 16:20:07.935674] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:39.512 [2024-12-15 16:20:07.935709] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.512 [2024-12-15 16:20:07.935719] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.512 [2024-12-15 16:20:07.935728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.512 [2024-12-15 16:20:07.935735] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.512 [2024-12-15 16:20:07.935852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:32:39.512 [2024-12-15 16:20:07.935961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:32:39.512 [2024-12-15 16:20:07.936070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:32:39.512 [2024-12-15 16:20:07.936071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:32:40.079 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:40.079 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:40.079 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:40.079 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:40.079 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 Malloc0 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 [2024-12-15 16:20:08.725855] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x124fdd0/0x125c4f0) succeed. 00:32:40.338 [2024-12-15 16:20:08.736611] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1251410/0x129db90) succeed. 
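[annotation] Test case 2's disconnect is a hard kill-and-replace: the first target (pid 3013735) is SIGKILLed two seconds into the I/O run, and a second nvmf_tgt (pid 3014555) is brought up and configured identically while the reconnect binary keeps retrying. A rough sketch of that control flow, condensed from the harness output above (helpers such as waitforlisten come from the test's common scripts, not a standalone tool):

  reconnect ... &                       # host-side I/O load (full flags recorded earlier)
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"                    # hard-kill the first target mid-I/O
  sleep 2
  nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &     # start a replacement target
  nvmfpid=$!
  waitforlisten "$nvmfpid"              # poll /var/tmp/spdk.sock until RPC answers
  # ...re-issue the same rpc_cmd transport/subsystem/listener setup...
  wait "$reconnectpid"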
00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 [2024-12-15 16:20:08.873792] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.338 16:20:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3013766 00:32:40.597 Write completed with error (sct=0, sc=8) 00:32:40.597 starting I/O failed 00:32:40.597 Write completed with error (sct=0, sc=8) 00:32:40.597 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 
starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Write completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 Read completed with error (sct=0, sc=8) 00:32:40.598 starting I/O failed 00:32:40.598 [2024-12-15 16:20:08.936543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.598 [2024-12-15 16:20:08.941930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.598 [2024-12-15 16:20:08.941988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.598 [2024-12-15 16:20:08.942010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.598 [2024-12-15 16:20:08.942020] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.598 [2024-12-15 16:20:08.942030] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.598 [2024-12-15 16:20:08.952039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.598 qpair failed and we were unable to recover it. 
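[annotation] From here on the log repeats one pattern per reconnect attempt. The 32 "Read/Write completed with error (sct=0, sc=8)" lines are the failing qpair's outstanding commands (queue depth -q 32) being completed in error; per the NVMe generic status codes, sc 0x08 is "command aborted due to SQ deletion". Each block after that is one CONNECT attempt: the restarted target rejects the I/O qpair because controller ID 0x1 no longer exists on it ("Unknown controller ID 0x1"), the host's fabric CONNECT poll reports sct 1, sc 130, and the qpair is torn down with CQ transport error -6 (ENXIO). sc 130 is 0x82, which in the NVMe-oF CONNECT status space corresponds to connect-invalid-parameters, consistent with a stale CNTLID. Quick check:

  printf 'sc=0x%02x\n' 130    # -> sc=0x82 (NVMe-oF CONNECT: invalid parameters)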
00:32:40.598 [2024-12-15 16:20:08.961848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.598 [2024-12-15 16:20:08.961890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.598 [2024-12-15 16:20:08.961908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.598 [2024-12-15 16:20:08.961918] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.598 [2024-12-15 16:20:08.961927] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.598 [2024-12-15 16:20:08.972083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.598 qpair failed and we were unable to recover it. 00:32:40.598 [2024-12-15 16:20:08.981854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.598 [2024-12-15 16:20:08.981898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.598 [2024-12-15 16:20:08.981916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.598 [2024-12-15 16:20:08.981926] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.598 [2024-12-15 16:20:08.981935] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.598 [2024-12-15 16:20:08.992160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.598 qpair failed and we were unable to recover it. 00:32:40.598 [2024-12-15 16:20:09.001944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.598 [2024-12-15 16:20:09.001988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.598 [2024-12-15 16:20:09.002006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.598 [2024-12-15 16:20:09.002016] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.598 [2024-12-15 16:20:09.002024] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.598 [2024-12-15 16:20:09.012251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.598 qpair failed and we were unable to recover it. 
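[annotation] For reference, the workload generating these attempts is the invocation recorded earlier in the log; its flags mirror SPDK's perf example. An annotated restatement (path as used in this job):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
    -q 32 \        # queue depth per qpair
    -o 4096 \      # 4 KiB I/O size
    -w randrw \    # random mixed workload
    -M 50 \        # 50% reads / 50% writes
    -t 10 \        # 10-second run
    -c 0xF \       # core mask: four cores
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'    # target transport ID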
00:32:40.598 [2024-12-15 16:20:09.022040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.598 [2024-12-15 16:20:09.022089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.598 [2024-12-15 16:20:09.022108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.598 [2024-12-15 16:20:09.022117] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.598 [2024-12-15 16:20:09.022126] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.598 [2024-12-15 16:20:09.032254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.598 qpair failed and we were unable to recover it. 00:32:40.598 [2024-12-15 16:20:09.041954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.598 [2024-12-15 16:20:09.041993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.598 [2024-12-15 16:20:09.042011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.598 [2024-12-15 16:20:09.042021] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.598 [2024-12-15 16:20:09.042030] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.598 [2024-12-15 16:20:09.052341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.598 qpair failed and we were unable to recover it. 00:32:40.598 [2024-12-15 16:20:09.062175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.598 [2024-12-15 16:20:09.062214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.598 [2024-12-15 16:20:09.062232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.598 [2024-12-15 16:20:09.062241] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.598 [2024-12-15 16:20:09.062250] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.598 [2024-12-15 16:20:09.072432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.598 qpair failed and we were unable to recover it. 
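[annotation] The failed attempts above land roughly every 20 ms of log time and continue until the harness times out or a CONNECT finally succeeds. When triaging a capture like this, two quick checks help; the log filename is illustrative, and nvmf_subsystem_get_qpairs is a standard SPDK RPC:

  grep -c 'qpair failed and we were unable to recover it' nvmf_target_disconnect.log
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1    # live qpairs on the target side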
00:32:40.599 [2024-12-15 16:20:09.082260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.599 [2024-12-15 16:20:09.082304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.599 [2024-12-15 16:20:09.082322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.599 [2024-12-15 16:20:09.082335] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.599 [2024-12-15 16:20:09.082343] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.599 [2024-12-15 16:20:09.092444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.599 qpair failed and we were unable to recover it. 00:32:40.599 [2024-12-15 16:20:09.102270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.599 [2024-12-15 16:20:09.102315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.599 [2024-12-15 16:20:09.102333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.599 [2024-12-15 16:20:09.102342] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.599 [2024-12-15 16:20:09.102351] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.599 [2024-12-15 16:20:09.112312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.599 qpair failed and we were unable to recover it. 00:32:40.599 [2024-12-15 16:20:09.122304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.599 [2024-12-15 16:20:09.122345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.599 [2024-12-15 16:20:09.122363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.599 [2024-12-15 16:20:09.122372] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.599 [2024-12-15 16:20:09.122381] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.599 [2024-12-15 16:20:09.132466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.599 qpair failed and we were unable to recover it. 
00:32:40.599 [2024-12-15 16:20:09.142327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.599 [2024-12-15 16:20:09.142370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.599 [2024-12-15 16:20:09.142389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.599 [2024-12-15 16:20:09.142398] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.599 [2024-12-15 16:20:09.142407] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.599 [2024-12-15 16:20:09.152526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.599 qpair failed and we were unable to recover it. 00:32:40.599 [2024-12-15 16:20:09.162471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.599 [2024-12-15 16:20:09.162515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.599 [2024-12-15 16:20:09.162534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.599 [2024-12-15 16:20:09.162544] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.599 [2024-12-15 16:20:09.162553] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.858 [2024-12-15 16:20:09.172555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-12-15 16:20:09.182404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.858 [2024-12-15 16:20:09.182451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.858 [2024-12-15 16:20:09.182470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.858 [2024-12-15 16:20:09.182479] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.858 [2024-12-15 16:20:09.182488] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.858 [2024-12-15 16:20:09.192837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.858 qpair failed and we were unable to recover it. 
00:32:40.858 [2024-12-15 16:20:09.202566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.858 [2024-12-15 16:20:09.202608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.858 [2024-12-15 16:20:09.202626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.858 [2024-12-15 16:20:09.202635] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.858 [2024-12-15 16:20:09.202644] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.858 [2024-12-15 16:20:09.212693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-12-15 16:20:09.222572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.858 [2024-12-15 16:20:09.222614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.858 [2024-12-15 16:20:09.222632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.858 [2024-12-15 16:20:09.222642] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.858 [2024-12-15 16:20:09.222651] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.858 [2024-12-15 16:20:09.233036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-12-15 16:20:09.242612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.858 [2024-12-15 16:20:09.242656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.858 [2024-12-15 16:20:09.242674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.858 [2024-12-15 16:20:09.242691] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.858 [2024-12-15 16:20:09.242701] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.858 [2024-12-15 16:20:09.252784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.858 qpair failed and we were unable to recover it. 
00:32:40.858 [2024-12-15 16:20:09.262730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.858 [2024-12-15 16:20:09.262773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.858 [2024-12-15 16:20:09.262794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.858 [2024-12-15 16:20:09.262803] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.858 [2024-12-15 16:20:09.262812] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.858 [2024-12-15 16:20:09.273156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-12-15 16:20:09.282720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.858 [2024-12-15 16:20:09.282762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.858 [2024-12-15 16:20:09.282781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.858 [2024-12-15 16:20:09.282790] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.858 [2024-12-15 16:20:09.282799] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.858 [2024-12-15 16:20:09.293029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.858 qpair failed and we were unable to recover it. 00:32:40.858 [2024-12-15 16:20:09.302782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.858 [2024-12-15 16:20:09.302819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.858 [2024-12-15 16:20:09.302836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.859 [2024-12-15 16:20:09.302845] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.859 [2024-12-15 16:20:09.302854] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.859 [2024-12-15 16:20:09.312896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.859 qpair failed and we were unable to recover it. 
00:32:40.859 [2024-12-15 16:20:09.322827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.859 [2024-12-15 16:20:09.322868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.859 [2024-12-15 16:20:09.322886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.859 [2024-12-15 16:20:09.322896] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.859 [2024-12-15 16:20:09.322905] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.859 [2024-12-15 16:20:09.333070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-12-15 16:20:09.342915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.859 [2024-12-15 16:20:09.342957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.859 [2024-12-15 16:20:09.342975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.859 [2024-12-15 16:20:09.342984] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.859 [2024-12-15 16:20:09.342996] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.859 [2024-12-15 16:20:09.353301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-12-15 16:20:09.363025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.859 [2024-12-15 16:20:09.363068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.859 [2024-12-15 16:20:09.363086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.859 [2024-12-15 16:20:09.363095] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.859 [2024-12-15 16:20:09.363104] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.859 [2024-12-15 16:20:09.373107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.859 qpair failed and we were unable to recover it. 
00:32:40.859 [2024-12-15 16:20:09.383088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.859 [2024-12-15 16:20:09.383131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.859 [2024-12-15 16:20:09.383149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.859 [2024-12-15 16:20:09.383159] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.859 [2024-12-15 16:20:09.383169] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.859 [2024-12-15 16:20:09.393549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-12-15 16:20:09.403053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.859 [2024-12-15 16:20:09.403095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.859 [2024-12-15 16:20:09.403122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.859 [2024-12-15 16:20:09.403132] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.859 [2024-12-15 16:20:09.403141] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.859 [2024-12-15 16:20:09.413526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.859 qpair failed and we were unable to recover it. 00:32:40.859 [2024-12-15 16:20:09.423187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.859 [2024-12-15 16:20:09.423229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.859 [2024-12-15 16:20:09.423248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.859 [2024-12-15 16:20:09.423258] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.859 [2024-12-15 16:20:09.423267] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:41.118 [2024-12-15 16:20:09.433405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.118 qpair failed and we were unable to recover it. 
00:32:41.118 [2024-12-15 16:20:09.443396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.118 [2024-12-15 16:20:09.443446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.118 [2024-12-15 16:20:09.443467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.118 [2024-12-15 16:20:09.443477] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.118 [2024-12-15 16:20:09.443486] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.118 [2024-12-15 16:20:09.453525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.118 qpair failed and we were unable to recover it.
00:32:41.118 [2024-12-15 16:20:09.463273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.118 [2024-12-15 16:20:09.463309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.118 [2024-12-15 16:20:09.463327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.118 [2024-12-15 16:20:09.463337] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.118 [2024-12-15 16:20:09.463346] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.118 [2024-12-15 16:20:09.473396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.118 qpair failed and we were unable to recover it.
00:32:41.118 [2024-12-15 16:20:09.483352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.118 [2024-12-15 16:20:09.483394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.118 [2024-12-15 16:20:09.483412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.118 [2024-12-15 16:20:09.483421] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.118 [2024-12-15 16:20:09.483430] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.118 [2024-12-15 16:20:09.493701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.118 qpair failed and we were unable to recover it.
00:32:41.118 [2024-12-15 16:20:09.503413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.118 [2024-12-15 16:20:09.503458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.118 [2024-12-15 16:20:09.503475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.118 [2024-12-15 16:20:09.503485] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.118 [2024-12-15 16:20:09.503493] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.118 [2024-12-15 16:20:09.513758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.118 qpair failed and we were unable to recover it.
00:32:41.118 [2024-12-15 16:20:09.523420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.118 [2024-12-15 16:20:09.523463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.118 [2024-12-15 16:20:09.523481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.118 [2024-12-15 16:20:09.523494] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.118 [2024-12-15 16:20:09.523502] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.118 [2024-12-15 16:20:09.533713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.118 qpair failed and we were unable to recover it.
00:32:41.118 [2024-12-15 16:20:09.543520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.118 [2024-12-15 16:20:09.543563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.118 [2024-12-15 16:20:09.543588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.118 [2024-12-15 16:20:09.543598] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.118 [2024-12-15 16:20:09.543606] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.118 [2024-12-15 16:20:09.553772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.118 qpair failed and we were unable to recover it.
00:32:41.118 [2024-12-15 16:20:09.563690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.118 [2024-12-15 16:20:09.563732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.118 [2024-12-15 16:20:09.563758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.118 [2024-12-15 16:20:09.563768] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.118 [2024-12-15 16:20:09.563777] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.118 [2024-12-15 16:20:09.573617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.118 qpair failed and we were unable to recover it.
00:32:41.118 [2024-12-15 16:20:09.583558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.118 [2024-12-15 16:20:09.583597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.118 [2024-12-15 16:20:09.583615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.118 [2024-12-15 16:20:09.583624] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.118 [2024-12-15 16:20:09.583633] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.118 [2024-12-15 16:20:09.593998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.118 qpair failed and we were unable to recover it.
00:32:41.119 [2024-12-15 16:20:09.603784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.119 [2024-12-15 16:20:09.603828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.119 [2024-12-15 16:20:09.603846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.119 [2024-12-15 16:20:09.603855] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.119 [2024-12-15 16:20:09.603864] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.119 [2024-12-15 16:20:09.613824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.119 qpair failed and we were unable to recover it.
00:32:41.119 [2024-12-15 16:20:09.623825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.119 [2024-12-15 16:20:09.623868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.119 [2024-12-15 16:20:09.623886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.119 [2024-12-15 16:20:09.623895] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.119 [2024-12-15 16:20:09.623905] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.119 [2024-12-15 16:20:09.634060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.119 qpair failed and we were unable to recover it.
00:32:41.119 [2024-12-15 16:20:09.643847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.119 [2024-12-15 16:20:09.643888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.119 [2024-12-15 16:20:09.643907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.119 [2024-12-15 16:20:09.643917] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.119 [2024-12-15 16:20:09.643926] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.119 [2024-12-15 16:20:09.654080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.119 qpair failed and we were unable to recover it.
00:32:41.119 [2024-12-15 16:20:09.663918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.119 [2024-12-15 16:20:09.663965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.119 [2024-12-15 16:20:09.663984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.119 [2024-12-15 16:20:09.663993] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.119 [2024-12-15 16:20:09.664002] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.119 [2024-12-15 16:20:09.674232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.119 qpair failed and we were unable to recover it.
00:32:41.119 [2024-12-15 16:20:09.684106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.119 [2024-12-15 16:20:09.684151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.119 [2024-12-15 16:20:09.684173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.119 [2024-12-15 16:20:09.684184] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.119 [2024-12-15 16:20:09.684195] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.694191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.703959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.703998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.704020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.704030] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.378 [2024-12-15 16:20:09.704039] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.714404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.724172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.724214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.724232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.724242] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.378 [2024-12-15 16:20:09.724250] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.734293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.744283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.744329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.744346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.744355] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.378 [2024-12-15 16:20:09.744364] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.754583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.764366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.764413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.764430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.764440] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.378 [2024-12-15 16:20:09.764449] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.774479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.784367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.784409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.784426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.784436] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.378 [2024-12-15 16:20:09.784445] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.794819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.804412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.804453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.804470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.804480] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.378 [2024-12-15 16:20:09.804488] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.814720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.824465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.824509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.824526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.824536] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.378 [2024-12-15 16:20:09.824544] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.834813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.844531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.844570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.844587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.844597] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.378 [2024-12-15 16:20:09.844606] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.378 [2024-12-15 16:20:09.854771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.378 qpair failed and we were unable to recover it.
00:32:41.378 [2024-12-15 16:20:09.864629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.378 [2024-12-15 16:20:09.864669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.378 [2024-12-15 16:20:09.864701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.378 [2024-12-15 16:20:09.864712] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.379 [2024-12-15 16:20:09.864721] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.379 [2024-12-15 16:20:09.875005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.379 qpair failed and we were unable to recover it.
00:32:41.379 [2024-12-15 16:20:09.884612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.379 [2024-12-15 16:20:09.884658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.379 [2024-12-15 16:20:09.884676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.379 [2024-12-15 16:20:09.884691] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.379 [2024-12-15 16:20:09.884700] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.379 [2024-12-15 16:20:09.894714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.379 qpair failed and we were unable to recover it.
00:32:41.379 [2024-12-15 16:20:09.904792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.379 [2024-12-15 16:20:09.904836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.379 [2024-12-15 16:20:09.904853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.379 [2024-12-15 16:20:09.904862] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.379 [2024-12-15 16:20:09.904871] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.379 [2024-12-15 16:20:09.915030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.379 qpair failed and we were unable to recover it.
00:32:41.379 [2024-12-15 16:20:09.924782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.379 [2024-12-15 16:20:09.924827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.379 [2024-12-15 16:20:09.924844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.379 [2024-12-15 16:20:09.924853] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.379 [2024-12-15 16:20:09.924862] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.379 [2024-12-15 16:20:09.935086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.379 qpair failed and we were unable to recover it.
00:32:41.379 [2024-12-15 16:20:09.944929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.379 [2024-12-15 16:20:09.944966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.379 [2024-12-15 16:20:09.944986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.379 [2024-12-15 16:20:09.944995] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.379 [2024-12-15 16:20:09.945004] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:09.955117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:09.964875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:09.964918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:09.964935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:09.964948] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:09.964956] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:09.975071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:09.985092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:09.985136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:09.985155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:09.985164] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:09.985173] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:09.995235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:10.005046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:10.005087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:10.005104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:10.005114] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:10.005122] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:10.015471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:10.025067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:10.025110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:10.025128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:10.025138] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:10.025147] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:10.035110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:10.045142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:10.045185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:10.045202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:10.045212] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:10.045221] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:10.055476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:10.065303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:10.065348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:10.065366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:10.065375] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:10.065384] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:10.075602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:10.085277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:10.085320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:10.085337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:10.085347] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:10.085356] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:10.095583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:10.105356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:10.105396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:10.105414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:10.105424] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:10.105433] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:10.115711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:10.125302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:10.125344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:10.125362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:10.125371] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:10.125380] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:10.135706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.638 [2024-12-15 16:20:10.145381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.638 [2024-12-15 16:20:10.145426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.638 [2024-12-15 16:20:10.145447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.638 [2024-12-15 16:20:10.145456] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.638 [2024-12-15 16:20:10.145465] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.638 [2024-12-15 16:20:10.156060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.638 qpair failed and we were unable to recover it.
00:32:41.639 [2024-12-15 16:20:10.165409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.639 [2024-12-15 16:20:10.165446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.639 [2024-12-15 16:20:10.165464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.639 [2024-12-15 16:20:10.165473] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.639 [2024-12-15 16:20:10.165482] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:41.639 [2024-12-15 16:20:10.175896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.639 qpair failed and we were unable to recover it. 00:32:41.639 [2024-12-15 16:20:10.185521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.639 [2024-12-15 16:20:10.185562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.639 [2024-12-15 16:20:10.185580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.639 [2024-12-15 16:20:10.185589] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.639 [2024-12-15 16:20:10.185598] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:41.639 [2024-12-15 16:20:10.195935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.639 qpair failed and we were unable to recover it. 00:32:41.639 [2024-12-15 16:20:10.205540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.639 [2024-12-15 16:20:10.205582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.639 [2024-12-15 16:20:10.205605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.639 [2024-12-15 16:20:10.205618] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.639 [2024-12-15 16:20:10.205632] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:41.901 [2024-12-15 16:20:10.215843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.901 qpair failed and we were unable to recover it. 
00:32:41.901 [2024-12-15 16:20:10.225623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.225670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.225693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.225703] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.225712] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.236118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.245680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.245727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.245744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.245753] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.245762] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.255941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.265745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.265785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.265803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.265813] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.265822] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.276232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.285813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.285857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.285875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.285884] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.285893] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.296184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.306056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.306100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.306118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.306127] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.306135] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.316359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.325880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.325927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.325944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.325954] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.325962] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.336377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.346001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.346044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.346062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.346071] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.346080] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.356339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.366036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.366079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.366097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.366106] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.366115] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.376342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.386155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.901 [2024-12-15 16:20:10.386201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.901 [2024-12-15 16:20:10.386218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.901 [2024-12-15 16:20:10.386228] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.901 [2024-12-15 16:20:10.386236] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.901 [2024-12-15 16:20:10.396579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.901 qpair failed and we were unable to recover it.
00:32:41.901 [2024-12-15 16:20:10.406200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.902 [2024-12-15 16:20:10.406242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.902 [2024-12-15 16:20:10.406259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.902 [2024-12-15 16:20:10.406269] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.902 [2024-12-15 16:20:10.406281] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.902 [2024-12-15 16:20:10.416603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.902 qpair failed and we were unable to recover it.
00:32:41.902 [2024-12-15 16:20:10.426281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.902 [2024-12-15 16:20:10.426322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.902 [2024-12-15 16:20:10.426340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.902 [2024-12-15 16:20:10.426350] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.902 [2024-12-15 16:20:10.426359] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.902 [2024-12-15 16:20:10.436634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.902 qpair failed and we were unable to recover it.
00:32:41.902 [2024-12-15 16:20:10.446320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.902 [2024-12-15 16:20:10.446363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.902 [2024-12-15 16:20:10.446381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.902 [2024-12-15 16:20:10.446391] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.902 [2024-12-15 16:20:10.446399] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:41.902 [2024-12-15 16:20:10.456638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.902 qpair failed and we were unable to recover it.
00:32:41.902 [2024-12-15 16:20:10.466391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.902 [2024-12-15 16:20:10.466432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.902 [2024-12-15 16:20:10.466451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.902 [2024-12-15 16:20:10.466460] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.902 [2024-12-15 16:20:10.466469] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:42.160 [2024-12-15 16:20:10.476641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:42.160 qpair failed and we were unable to recover it.
00:32:42.160 [2024-12-15 16:20:10.486476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.160 [2024-12-15 16:20:10.486514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.160 [2024-12-15 16:20:10.486532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.160 [2024-12-15 16:20:10.486541] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.160 [2024-12-15 16:20:10.486550] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:42.160 [2024-12-15 16:20:10.496486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:42.160 qpair failed and we were unable to recover it.
00:32:42.160 [2024-12-15 16:20:10.506431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.160 [2024-12-15 16:20:10.506473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.160 [2024-12-15 16:20:10.506490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.160 [2024-12-15 16:20:10.506500] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.160 [2024-12-15 16:20:10.506509] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:42.160 [2024-12-15 16:20:10.516781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:42.160 qpair failed and we were unable to recover it.
00:32:42.160 [2024-12-15 16:20:10.526486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.160 [2024-12-15 16:20:10.526529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.160 [2024-12-15 16:20:10.526546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.160 [2024-12-15 16:20:10.526556] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.160 [2024-12-15 16:20:10.526565] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:42.160 [2024-12-15 16:20:10.536772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:42.160 qpair failed and we were unable to recover it.
00:32:42.160 [2024-12-15 16:20:10.546559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.160 [2024-12-15 16:20:10.546602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.161 [2024-12-15 16:20:10.546620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.161 [2024-12-15 16:20:10.546630] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.161 [2024-12-15 16:20:10.546639] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:42.161 [2024-12-15 16:20:10.556863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:42.161 qpair failed and we were unable to recover it.
00:32:42.161 [2024-12-15 16:20:10.566619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.161 [2024-12-15 16:20:10.566661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.161 [2024-12-15 16:20:10.566679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.161 [2024-12-15 16:20:10.566693] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.161 [2024-12-15 16:20:10.566703] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:42.161 [2024-12-15 16:20:10.576901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:42.161 qpair failed and we were unable to recover it.
00:32:42.161 [2024-12-15 16:20:10.586683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.161 [2024-12-15 16:20:10.586732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.161 [2024-12-15 16:20:10.586753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.161 [2024-12-15 16:20:10.586762] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.161 [2024-12-15 16:20:10.586771] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.161 [2024-12-15 16:20:10.596921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.161 qpair failed and we were unable to recover it. 00:32:42.161 [2024-12-15 16:20:10.606676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.161 [2024-12-15 16:20:10.606727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.161 [2024-12-15 16:20:10.606745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.161 [2024-12-15 16:20:10.606755] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.161 [2024-12-15 16:20:10.606764] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.161 [2024-12-15 16:20:10.616983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.161 qpair failed and we were unable to recover it. 00:32:42.161 [2024-12-15 16:20:10.626811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.161 [2024-12-15 16:20:10.626853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.161 [2024-12-15 16:20:10.626871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.161 [2024-12-15 16:20:10.626880] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.161 [2024-12-15 16:20:10.626889] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.161 [2024-12-15 16:20:10.637112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.161 qpair failed and we were unable to recover it. 
00:32:42.161 [2024-12-15 16:20:10.646741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.161 [2024-12-15 16:20:10.646781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.161 [2024-12-15 16:20:10.646806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.161 [2024-12-15 16:20:10.646816] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.161 [2024-12-15 16:20:10.646825] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.161 [2024-12-15 16:20:10.657155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.161 qpair failed and we were unable to recover it. 00:32:42.161 [2024-12-15 16:20:10.666798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.161 [2024-12-15 16:20:10.666837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.161 [2024-12-15 16:20:10.666854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.161 [2024-12-15 16:20:10.666863] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.161 [2024-12-15 16:20:10.666872] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.161 [2024-12-15 16:20:10.677137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.161 qpair failed and we were unable to recover it. 00:32:42.161 [2024-12-15 16:20:10.687029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.161 [2024-12-15 16:20:10.687071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.161 [2024-12-15 16:20:10.687089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.161 [2024-12-15 16:20:10.687098] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.161 [2024-12-15 16:20:10.687107] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.161 [2024-12-15 16:20:10.697307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.161 qpair failed and we were unable to recover it. 
00:32:42.161 [2024-12-15 16:20:10.707084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.161 [2024-12-15 16:20:10.707126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.161 [2024-12-15 16:20:10.707144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.161 [2024-12-15 16:20:10.707155] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.161 [2024-12-15 16:20:10.707164] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.161 [2024-12-15 16:20:10.717241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.161 qpair failed and we were unable to recover it. 00:32:42.161 [2024-12-15 16:20:10.727132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.161 [2024-12-15 16:20:10.727171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.161 [2024-12-15 16:20:10.727190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.161 [2024-12-15 16:20:10.727199] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.161 [2024-12-15 16:20:10.727208] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.420 [2024-12-15 16:20:10.737343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.420 qpair failed and we were unable to recover it. 00:32:42.420 [2024-12-15 16:20:10.747167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.420 [2024-12-15 16:20:10.747206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.420 [2024-12-15 16:20:10.747224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.420 [2024-12-15 16:20:10.747234] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.420 [2024-12-15 16:20:10.747242] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.420 [2024-12-15 16:20:10.757535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.420 qpair failed and we were unable to recover it. 
00:32:42.420 [2024-12-15 16:20:10.767235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.767278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.767299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.767308] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.767317] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.777557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 00:32:42.421 [2024-12-15 16:20:10.787312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.787357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.787375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.787384] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.787393] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.797970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 00:32:42.421 [2024-12-15 16:20:10.807412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.807455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.807475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.807485] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.807495] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.817652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 
00:32:42.421 [2024-12-15 16:20:10.827334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.827376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.827393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.827403] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.827411] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.837652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 00:32:42.421 [2024-12-15 16:20:10.847408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.847450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.847468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.847478] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.847491] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.857698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 00:32:42.421 [2024-12-15 16:20:10.867568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.867614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.867631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.867640] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.867649] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.877717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 
00:32:42.421 [2024-12-15 16:20:10.887600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.887640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.887658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.887667] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.887676] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.897659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 00:32:42.421 [2024-12-15 16:20:10.907535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.907574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.907591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.907601] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.907609] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.917829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 00:32:42.421 [2024-12-15 16:20:10.927598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.927639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.927656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.927665] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.927674] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.937856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 
00:32:42.421 [2024-12-15 16:20:10.947696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.947741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.947759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.947768] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.947777] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.957899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 00:32:42.421 [2024-12-15 16:20:10.967864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.967901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.967918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.967927] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.967936] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.421 [2024-12-15 16:20:10.977954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.421 qpair failed and we were unable to recover it. 00:32:42.421 [2024-12-15 16:20:10.987850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.421 [2024-12-15 16:20:10.987893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.421 [2024-12-15 16:20:10.987913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.421 [2024-12-15 16:20:10.987923] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.421 [2024-12-15 16:20:10.987932] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:10.998142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 
00:32:42.680 [2024-12-15 16:20:11.007889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.007931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.007949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.007958] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.007967] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.018236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 00:32:42.680 [2024-12-15 16:20:11.028068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.028106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.028123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.028136] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.028144] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.038382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 00:32:42.680 [2024-12-15 16:20:11.048059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.048101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.048119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.048128] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.048137] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.058345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 
00:32:42.680 [2024-12-15 16:20:11.068153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.068196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.068213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.068223] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.068232] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.078561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 00:32:42.680 [2024-12-15 16:20:11.088125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.088169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.088187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.088196] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.088205] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.098338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 00:32:42.680 [2024-12-15 16:20:11.108284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.108331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.108348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.108358] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.108366] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.118563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 
00:32:42.680 [2024-12-15 16:20:11.128174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.128219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.128236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.128246] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.128254] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.138518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 00:32:42.680 [2024-12-15 16:20:11.148380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.148422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.148439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.148449] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.148458] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.158514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 00:32:42.680 [2024-12-15 16:20:11.168412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.168454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.168471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.168481] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.168489] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.178605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 
00:32:42.680 [2024-12-15 16:20:11.188472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.188518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.188536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.188545] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.188554] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.680 [2024-12-15 16:20:11.198539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.680 qpair failed and we were unable to recover it. 00:32:42.680 [2024-12-15 16:20:11.208518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.680 [2024-12-15 16:20:11.208557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.680 [2024-12-15 16:20:11.208579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.680 [2024-12-15 16:20:11.208588] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.680 [2024-12-15 16:20:11.208597] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.681 [2024-12-15 16:20:11.218633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.681 qpair failed and we were unable to recover it. 00:32:42.681 [2024-12-15 16:20:11.228552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.681 [2024-12-15 16:20:11.228587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.681 [2024-12-15 16:20:11.228604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.681 [2024-12-15 16:20:11.228614] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.681 [2024-12-15 16:20:11.228622] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.681 [2024-12-15 16:20:11.238768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.681 qpair failed and we were unable to recover it. 
00:32:42.940 [2024-12-15 16:20:11.248582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.248626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.248647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.248659] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.248670] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.258992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 00:32:42.940 [2024-12-15 16:20:11.268572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.268618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.268636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.268645] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.268654] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.279086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 00:32:42.940 [2024-12-15 16:20:11.288768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.288812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.288830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.288839] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.288852] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.298949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 
00:32:42.940 [2024-12-15 16:20:11.308701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.308741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.308759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.308769] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.308779] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.319043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 00:32:42.940 [2024-12-15 16:20:11.328844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.328889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.328907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.328916] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.328925] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.338967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 00:32:42.940 [2024-12-15 16:20:11.348886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.348930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.348947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.348957] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.348965] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.359216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 
00:32:42.940 [2024-12-15 16:20:11.369031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.369076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.369094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.369104] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.369112] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.379157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 00:32:42.940 [2024-12-15 16:20:11.389093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.389138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.389156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.389165] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.389174] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.399335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 00:32:42.940 [2024-12-15 16:20:11.409239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.409282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.409299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.409309] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.409317] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.419487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 
00:32:42.940 [2024-12-15 16:20:11.429280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.940 [2024-12-15 16:20:11.429321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.940 [2024-12-15 16:20:11.429338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.940 [2024-12-15 16:20:11.429348] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.940 [2024-12-15 16:20:11.429356] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.940 [2024-12-15 16:20:11.439859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.940 qpair failed and we were unable to recover it. 00:32:42.940 [2024-12-15 16:20:11.449316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.941 [2024-12-15 16:20:11.449357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.941 [2024-12-15 16:20:11.449375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.941 [2024-12-15 16:20:11.449384] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.941 [2024-12-15 16:20:11.449393] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.941 [2024-12-15 16:20:11.459592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.941 qpair failed and we were unable to recover it. 00:32:42.941 [2024-12-15 16:20:11.469217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.941 [2024-12-15 16:20:11.469257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.941 [2024-12-15 16:20:11.469279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.941 [2024-12-15 16:20:11.469292] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.941 [2024-12-15 16:20:11.469301] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.941 [2024-12-15 16:20:11.479659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.941 qpair failed and we were unable to recover it. 
00:32:42.941 [2024-12-15 16:20:11.489463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.941 [2024-12-15 16:20:11.489503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.941 [2024-12-15 16:20:11.489521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.941 [2024-12-15 16:20:11.489530] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.941 [2024-12-15 16:20:11.489539] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:42.941 [2024-12-15 16:20:11.499485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.941 qpair failed and we were unable to recover it. 00:32:43.200 [2024-12-15 16:20:11.509487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.200 [2024-12-15 16:20:11.509528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.200 [2024-12-15 16:20:11.509548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.200 [2024-12-15 16:20:11.509557] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.200 [2024-12-15 16:20:11.509566] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:43.200 [2024-12-15 16:20:11.519761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:43.200 qpair failed and we were unable to recover it. 00:32:43.200 [2024-12-15 16:20:11.529540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.200 [2024-12-15 16:20:11.529581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.200 [2024-12-15 16:20:11.529598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.200 [2024-12-15 16:20:11.529608] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.200 [2024-12-15 16:20:11.529616] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:43.200 [2024-12-15 16:20:11.539662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:43.200 qpair failed and we were unable to recover it. 
00:32:43.200 [2024-12-15 16:20:11.549637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.200 [2024-12-15 16:20:11.549678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.200 [2024-12-15 16:20:11.549701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.200 [2024-12-15 16:20:11.549710] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.200 [2024-12-15 16:20:11.549719] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:43.200 [2024-12-15 16:20:11.559929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:43.200 qpair failed and we were unable to recover it. 00:32:43.200 [2024-12-15 16:20:11.569616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.200 [2024-12-15 16:20:11.569659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.200 [2024-12-15 16:20:11.569677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.200 [2024-12-15 16:20:11.569692] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.200 [2024-12-15 16:20:11.569701] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:43.200 [2024-12-15 16:20:11.579757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:43.200 qpair failed and we were unable to recover it. 00:32:43.200 [2024-12-15 16:20:11.589676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.200 [2024-12-15 16:20:11.589724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.200 [2024-12-15 16:20:11.589742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.200 [2024-12-15 16:20:11.589751] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.200 [2024-12-15 16:20:11.589760] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:43.200 [2024-12-15 16:20:11.599872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:43.200 qpair failed and we were unable to recover it. 
00:32:43.200 [2024-12-15 16:20:11.609672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.200 [2024-12-15 16:20:11.609722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.200 [2024-12-15 16:20:11.609740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.200 [2024-12-15 16:20:11.609749] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.200 [2024-12-15 16:20:11.609758] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:43.200 [2024-12-15 16:20:11.619986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:43.200 qpair failed and we were unable to recover it. 00:32:43.200 [2024-12-15 16:20:11.629775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.200 [2024-12-15 16:20:11.629819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.200 [2024-12-15 16:20:11.629837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.200 [2024-12-15 16:20:11.629846] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.200 [2024-12-15 16:20:11.629855] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:43.200 [2024-12-15 16:20:11.640028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:43.200 qpair failed and we were unable to recover it. 00:32:43.200 [2024-12-15 16:20:11.649820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:43.200 [2024-12-15 16:20:11.649863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:43.200 [2024-12-15 16:20:11.649884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:43.200 [2024-12-15 16:20:11.649894] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:43.200 [2024-12-15 16:20:11.649902] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:43.200 [2024-12-15 16:20:11.660121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:43.200 qpair failed and we were unable to recover it. 
00:32:44.501 [2024-12-15 16:20:12.993751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.501 [2024-12-15 16:20:12.993795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.501 [2024-12-15 16:20:12.993813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.501 [2024-12-15 16:20:12.993823] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.501 [2024-12-15 16:20:12.993831] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.501 [2024-12-15 16:20:13.004142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.501 qpair failed and we were unable to recover it. 00:32:44.501 [2024-12-15 16:20:13.013854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.501 [2024-12-15 16:20:13.013896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.501 [2024-12-15 16:20:13.013913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.501 [2024-12-15 16:20:13.013923] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.501 [2024-12-15 16:20:13.013932] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.501 [2024-12-15 16:20:13.024107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.501 qpair failed and we were unable to recover it. 00:32:44.501 [2024-12-15 16:20:13.033854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.501 [2024-12-15 16:20:13.033898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.501 [2024-12-15 16:20:13.033916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.501 [2024-12-15 16:20:13.033925] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.501 [2024-12-15 16:20:13.033934] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.501 [2024-12-15 16:20:13.044254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.501 qpair failed and we were unable to recover it. 
00:32:44.501 [2024-12-15 16:20:13.054068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.501 [2024-12-15 16:20:13.054112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.501 [2024-12-15 16:20:13.054129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.501 [2024-12-15 16:20:13.054139] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.501 [2024-12-15 16:20:13.054147] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.501 [2024-12-15 16:20:13.064155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.501 qpair failed and we were unable to recover it. 00:32:44.760 [2024-12-15 16:20:13.074070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.760 [2024-12-15 16:20:13.074107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.760 [2024-12-15 16:20:13.074126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.760 [2024-12-15 16:20:13.074136] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.760 [2024-12-15 16:20:13.074145] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.760 [2024-12-15 16:20:13.084287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.760 qpair failed and we were unable to recover it. 00:32:44.760 [2024-12-15 16:20:13.094122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.760 [2024-12-15 16:20:13.094164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.760 [2024-12-15 16:20:13.094183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.760 [2024-12-15 16:20:13.094192] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.760 [2024-12-15 16:20:13.094201] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.760 [2024-12-15 16:20:13.104364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.760 qpair failed and we were unable to recover it. 
00:32:44.760 [2024-12-15 16:20:13.114135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.760 [2024-12-15 16:20:13.114179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.760 [2024-12-15 16:20:13.114197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.760 [2024-12-15 16:20:13.114206] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.760 [2024-12-15 16:20:13.114215] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.760 [2024-12-15 16:20:13.124520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.760 qpair failed and we were unable to recover it. 00:32:44.760 [2024-12-15 16:20:13.134237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.760 [2024-12-15 16:20:13.134279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.134297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.134306] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.134315] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.144465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 00:32:44.761 [2024-12-15 16:20:13.154329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.154370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.154392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.154401] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.154409] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.164552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 
00:32:44.761 [2024-12-15 16:20:13.174414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.174455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.174473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.174482] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.174491] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.184362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 00:32:44.761 [2024-12-15 16:20:13.194464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.194506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.194524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.194533] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.194542] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.204728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 00:32:44.761 [2024-12-15 16:20:13.214508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.214547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.214565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.214575] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.214583] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.224752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 
00:32:44.761 [2024-12-15 16:20:13.234532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.234573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.234591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.234601] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.234613] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.244712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 00:32:44.761 [2024-12-15 16:20:13.254466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.254507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.254525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.254534] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.254543] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.264735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 00:32:44.761 [2024-12-15 16:20:13.274590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.274635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.274652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.274661] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.274670] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.284861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 
00:32:44.761 [2024-12-15 16:20:13.294656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.294705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.294722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.294732] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.294741] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.304918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 00:32:44.761 [2024-12-15 16:20:13.314737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:44.761 [2024-12-15 16:20:13.314780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:44.761 [2024-12-15 16:20:13.314798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:44.761 [2024-12-15 16:20:13.314807] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:44.761 [2024-12-15 16:20:13.314816] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:44.761 [2024-12-15 16:20:13.325015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:44.761 qpair failed and we were unable to recover it. 00:32:45.020 [2024-12-15 16:20:13.334781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.020 [2024-12-15 16:20:13.334827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.020 [2024-12-15 16:20:13.334847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.020 [2024-12-15 16:20:13.334856] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.020 [2024-12-15 16:20:13.334865] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.020 [2024-12-15 16:20:13.344989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.020 qpair failed and we were unable to recover it. 
00:32:45.020 [2024-12-15 16:20:13.354834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.020 [2024-12-15 16:20:13.354877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.020 [2024-12-15 16:20:13.354895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.020 [2024-12-15 16:20:13.354904] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.020 [2024-12-15 16:20:13.354913] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.020 [2024-12-15 16:20:13.365450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.020 qpair failed and we were unable to recover it. 00:32:45.020 [2024-12-15 16:20:13.374984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.020 [2024-12-15 16:20:13.375023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.020 [2024-12-15 16:20:13.375040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.020 [2024-12-15 16:20:13.375050] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.020 [2024-12-15 16:20:13.375058] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.385147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 00:32:45.021 [2024-12-15 16:20:13.394919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.394958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.394976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.394986] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.394994] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.405271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 
00:32:45.021 [2024-12-15 16:20:13.415032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.415073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.415092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.415104] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.415113] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.425270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 00:32:45.021 [2024-12-15 16:20:13.435118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.435158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.435176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.435186] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.435195] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.445377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 00:32:45.021 [2024-12-15 16:20:13.455222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.455264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.455282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.455291] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.455301] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.465282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 
00:32:45.021 [2024-12-15 16:20:13.475238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.475275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.475293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.475302] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.475311] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.485584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 00:32:45.021 [2024-12-15 16:20:13.495181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.495224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.495242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.495251] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.495260] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.505449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 00:32:45.021 [2024-12-15 16:20:13.515335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.515377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.515394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.515403] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.515412] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.525667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 
00:32:45.021 [2024-12-15 16:20:13.535315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.535356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.535373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.535383] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.535392] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.545469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 00:32:45.021 [2024-12-15 16:20:13.555482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.555522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.555540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.555549] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.555558] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.565863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 00:32:45.021 [2024-12-15 16:20:13.575508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.021 [2024-12-15 16:20:13.575552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.021 [2024-12-15 16:20:13.575570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.021 [2024-12-15 16:20:13.575580] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.021 [2024-12-15 16:20:13.575588] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.021 [2024-12-15 16:20:13.585592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.021 qpair failed and we were unable to recover it. 
00:32:45.281 [2024-12-15 16:20:13.595569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.595611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.595635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.595644] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.595653] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.605908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 00:32:45.281 [2024-12-15 16:20:13.615520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.615557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.615575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.615585] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.615593] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.625881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 00:32:45.281 [2024-12-15 16:20:13.635539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.635578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.635596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.635605] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.635613] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.645966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 
00:32:45.281 [2024-12-15 16:20:13.655725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.655768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.655785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.655794] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.655803] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.665838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 00:32:45.281 [2024-12-15 16:20:13.675735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.675781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.675799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.675808] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.675817] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.686232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 00:32:45.281 [2024-12-15 16:20:13.695729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.695777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.695794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.695804] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.695813] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.705969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 
00:32:45.281 [2024-12-15 16:20:13.715959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.715997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.716014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.716024] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.716033] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.726265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 00:32:45.281 [2024-12-15 16:20:13.735940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.735982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.735999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.736009] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.736017] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.746239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 00:32:45.281 [2024-12-15 16:20:13.756001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.756051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.756068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.756078] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.756087] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.766399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 
00:32:45.281 [2024-12-15 16:20:13.776019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.776070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.776088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.281 [2024-12-15 16:20:13.776098] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.281 [2024-12-15 16:20:13.776107] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.281 [2024-12-15 16:20:13.786239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.281 qpair failed and we were unable to recover it. 00:32:45.281 [2024-12-15 16:20:13.796169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.281 [2024-12-15 16:20:13.796209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.281 [2024-12-15 16:20:13.796227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.282 [2024-12-15 16:20:13.796237] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.282 [2024-12-15 16:20:13.796245] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.282 [2024-12-15 16:20:13.806437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.282 qpair failed and we were unable to recover it. 00:32:45.282 [2024-12-15 16:20:13.816137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.282 [2024-12-15 16:20:13.816178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.282 [2024-12-15 16:20:13.816196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.282 [2024-12-15 16:20:13.816205] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.282 [2024-12-15 16:20:13.816214] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.282 [2024-12-15 16:20:13.826388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.282 qpair failed and we were unable to recover it. 
00:32:45.282 [2024-12-15 16:20:13.836282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.282 [2024-12-15 16:20:13.836331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.282 [2024-12-15 16:20:13.836349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.282 [2024-12-15 16:20:13.836359] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.282 [2024-12-15 16:20:13.836367] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.282 [2024-12-15 16:20:13.846396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.282 qpair failed and we were unable to recover it. 00:32:45.549 [2024-12-15 16:20:13.856297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.549 [2024-12-15 16:20:13.856342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.549 [2024-12-15 16:20:13.856361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.549 [2024-12-15 16:20:13.856374] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.549 [2024-12-15 16:20:13.856383] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.549 [2024-12-15 16:20:13.866635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.549 qpair failed and we were unable to recover it. 00:32:45.549 [2024-12-15 16:20:13.876359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.549 [2024-12-15 16:20:13.876403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.549 [2024-12-15 16:20:13.876422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.549 [2024-12-15 16:20:13.876432] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.549 [2024-12-15 16:20:13.876441] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.549 [2024-12-15 16:20:13.886601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.549 qpair failed and we were unable to recover it. 
00:32:45.549 [2024-12-15 16:20:13.896371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.549 [2024-12-15 16:20:13.896414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.550 [2024-12-15 16:20:13.896433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.550 [2024-12-15 16:20:13.896442] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.550 [2024-12-15 16:20:13.896451] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.550 [2024-12-15 16:20:13.906724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.550 qpair failed and we were unable to recover it. 00:32:45.550 [2024-12-15 16:20:13.916347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.550 [2024-12-15 16:20:13.916387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.550 [2024-12-15 16:20:13.916405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.550 [2024-12-15 16:20:13.916414] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.550 [2024-12-15 16:20:13.916423] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.550 [2024-12-15 16:20:13.926792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.550 qpair failed and we were unable to recover it. 00:32:45.550 [2024-12-15 16:20:13.936532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.550 [2024-12-15 16:20:13.936577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.550 [2024-12-15 16:20:13.936595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.550 [2024-12-15 16:20:13.936604] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.550 [2024-12-15 16:20:13.936613] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.550 [2024-12-15 16:20:13.946830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.550 qpair failed and we were unable to recover it. 
00:32:45.550 [2024-12-15 16:20:13.956584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.550 [2024-12-15 16:20:13.956628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.550 [2024-12-15 16:20:13.956645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.550 [2024-12-15 16:20:13.956655] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.550 [2024-12-15 16:20:13.956664] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.550 [2024-12-15 16:20:13.966951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.550 qpair failed and we were unable to recover it. 00:32:45.550 [2024-12-15 16:20:13.976727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:45.550 [2024-12-15 16:20:13.976770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:45.550 [2024-12-15 16:20:13.976787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:45.550 [2024-12-15 16:20:13.976797] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:45.550 [2024-12-15 16:20:13.976805] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:45.550 [2024-12-15 16:20:13.987122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.550 qpair failed and we were unable to recover it. 
00:32:46.523 Write completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Write completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Write completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Write completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Write completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Write completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Read completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Write completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.523 Write completed with error (sct=0, sc=8) 00:32:46.523 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Read completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Read completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Read completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Read completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Read completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Read completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 Write completed with error (sct=0, sc=8) 00:32:46.524 starting I/O failed 00:32:46.524 [2024-12-15 16:20:14.992711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:46.524 [2024-12-15 16:20:14.999593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.524 [2024-12-15 16:20:14.999638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.524 [2024-12-15 16:20:14.999656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.524 [2024-12-15 16:20:14.999666] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:32:46.524 [2024-12-15 16:20:14.999675] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1140 00:32:46.524 [2024-12-15 16:20:15.010842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:46.524 qpair failed and we were unable to recover it. 00:32:46.524 [2024-12-15 16:20:15.019949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:46.524 [2024-12-15 16:20:15.019993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:46.524 [2024-12-15 16:20:15.020011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:46.524 [2024-12-15 16:20:15.020021] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:46.524 [2024-12-15 16:20:15.020029] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1140 00:32:46.524 [2024-12-15 16:20:15.030266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:46.524 qpair failed and we were unable to recover it. 00:32:46.524 [2024-12-15 16:20:15.030398] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:46.524 A controller has encountered a failure and is being reset. 00:32:46.524 [2024-12-15 16:20:15.030518] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:46.524 [2024-12-15 16:20:15.064133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:46.524 Controller properly reset. 00:32:46.782 Initializing NVMe Controllers 00:32:46.782 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:46.782 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:46.782 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:46.782 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:46.782 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:46.782 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:46.782 Initialization complete. Launching workers. 
00:32:46.782 Starting thread on core 1 00:32:46.782 Starting thread on core 2 00:32:46.782 Starting thread on core 3 00:32:46.782 Starting thread on core 0 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:32:46.782 00:32:46.782 real 0m11.907s 00:32:46.782 user 0m24.794s 00:32:46.782 sys 0m3.000s 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:46.782 ************************************ 00:32:46.782 END TEST nvmf_target_disconnect_tc2 00:32:46.782 ************************************ 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:46.782 ************************************ 00:32:46.782 START TEST nvmf_target_disconnect_tc3 00:32:46.782 ************************************ 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3015669 00:32:46.782 16:20:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:32:48.681 16:20:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3014555 00:32:48.681 16:20:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:32:50.056 Read completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.056 Read completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.056 Write completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.056 Read completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.056 Read completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.056 Read completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.056 Write completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.056 Write completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.056 Read completed with error (sct=0, sc=8) 00:32:50.056 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read 
completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Write completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 Read completed with error (sct=0, sc=8) 00:32:50.057 starting I/O failed 00:32:50.057 [2024-12-15 16:20:18.407038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.991 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3014555 Killed "${NVMF_APP[@]}" "$@" 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@505 -- # nvmfpid=3016462 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@506 -- # waitforlisten 3016462 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 
3016462 ']' 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.991 16:20:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:50.991 [2024-12-15 16:20:19.292614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:50.991 [2024-12-15 16:20:19.292669] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.991 [2024-12-15 16:20:19.379530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, 
sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Read completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 Write completed with error (sct=0, sc=8) 00:32:50.991 starting I/O failed 00:32:50.991 [2024-12-15 16:20:19.412166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:50.991 [2024-12-15 16:20:19.417754] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.991 [2024-12-15 16:20:19.417793] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.991 [2024-12-15 16:20:19.417803] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.991 [2024-12-15 16:20:19.417812] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.991 [2024-12-15 16:20:19.417820] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.991 [2024-12-15 16:20:19.417939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:32:50.991 [2024-12-15 16:20:19.418050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:32:50.991 [2024-12-15 16:20:19.418159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:32:50.991 [2024-12-15 16:20:19.418161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:32:51.557 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:51.557 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:32:51.557 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:51.557 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.557 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 Malloc0 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 [2024-12-15 16:20:20.206304] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bf2dd0/0x1bff4f0) succeed. 00:32:51.815 [2024-12-15 16:20:20.217346] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bf4410/0x1c40b90) succeed. 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 [2024-12-15 16:20:20.356407] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 16:20:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.815 16:20:20 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3015669 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Write completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 Read completed with error (sct=0, sc=8) 00:32:52.074 starting I/O failed 00:32:52.074 [2024-12-15 16:20:20.417221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.007 Read completed with error (sct=0, sc=8) 00:32:53.007 starting I/O failed 00:32:53.007 Read completed with error (sct=0, sc=8) 00:32:53.007 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 
starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Write completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 Read completed with error (sct=0, sc=8) 00:32:53.008 starting I/O failed 00:32:53.008 [2024-12-15 16:20:21.422520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:53.008 [2024-12-15 16:20:21.422581] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:53.008 A controller has encountered a failure and is being reset. 00:32:53.008 Resorting to new failover address 192.168.100.9 00:32:53.008 [2024-12-15 16:20:21.422708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.008 [2024-12-15 16:20:21.422791] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:53.008 [2024-12-15 16:20:21.454815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:53.008 Controller properly reset. 
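The 192.168.100.9 address that the recovery just failed over to is the one configured a few entries earlier through rpc_cmd, which in this harness is a thin wrapper over scripts/rpc.py talking to the freshly started nvmf_tgt. The same listener-side bring-up can be reproduced standalone roughly as follows (a sketch assuming the default /var/tmp/spdk.sock and the workspace paths used in this run):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Backing namespace: 64 MiB malloc bdev with 512-byte blocks.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Failover listener on the .9 address, plus the matching discovery listener.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

The reconnect example was launched with alt_traddr:192.168.100.9 in its -r string, so once qpairs to 192.168.100.8 stopped recovering it retried the CONNECT against this listener; that is the reset-and-recover path that just completed above.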
00:32:57.187 Initializing NVMe Controllers 00:32:57.187 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.187 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.187 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:57.187 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:57.187 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:57.187 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:57.187 Initialization complete. Launching workers. 00:32:57.187 Starting thread on core 1 00:32:57.187 Starting thread on core 2 00:32:57.187 Starting thread on core 3 00:32:57.187 Starting thread on core 0 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:32:57.187 00:32:57.187 real 0m10.267s 00:32:57.187 user 1m2.957s 00:32:57.187 sys 0m1.809s 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:57.187 ************************************ 00:32:57.187 END TEST nvmf_target_disconnect_tc3 00:32:57.187 ************************************ 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:57.187 rmmod nvme_rdma 00:32:57.187 rmmod nvme_fabrics 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 3016462 ']' 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 3016462 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3016462 ']' 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3016462 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3016462 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3016462' 00:32:57.187 killing process with pid 3016462 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3016462 00:32:57.187 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3016462 00:32:57.446 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:57.446 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:32:57.446 00:32:57.446 real 0m30.726s 00:32:57.446 user 1m56.176s 00:32:57.446 sys 0m10.578s 00:32:57.446 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.446 16:20:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:57.446 ************************************ 00:32:57.446 END TEST nvmf_target_disconnect 00:32:57.446 ************************************ 00:32:57.446 16:20:25 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:57.446 00:32:57.446 real 7m7.550s 00:32:57.446 user 20m15.634s 00:32:57.446 sys 1m38.377s 00:32:57.446 16:20:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.446 16:20:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.446 ************************************ 00:32:57.446 END TEST nvmf_host 00:32:57.446 ************************************ 00:32:57.446 16:20:26 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:32:57.446 00:32:57.446 real 26m29.458s 00:32:57.446 user 78m6.261s 00:32:57.446 sys 6m23.220s 00:32:57.446 16:20:26 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.446 16:20:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:57.446 ************************************ 00:32:57.446 END TEST nvmf_rdma 00:32:57.446 ************************************ 00:32:57.705 16:20:26 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:32:57.705 16:20:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:57.705 16:20:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:57.705 16:20:26 -- common/autotest_common.sh@10 -- # set +x 00:32:57.705 ************************************ 00:32:57.705 START TEST spdkcli_nvmf_rdma 00:32:57.705 ************************************ 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:32:57.705 * Looking for test storage... 
00:32:57.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # lcov --version 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:57.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.705 --rc genhtml_branch_coverage=1 00:32:57.705 --rc genhtml_function_coverage=1 00:32:57.705 --rc genhtml_legend=1 00:32:57.705 --rc geninfo_all_blocks=1 00:32:57.705 --rc geninfo_unexecuted_blocks=1 00:32:57.705 00:32:57.705 ' 00:32:57.705 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:57.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:32:57.705 --rc genhtml_branch_coverage=1 00:32:57.705 --rc genhtml_function_coverage=1 00:32:57.706 --rc genhtml_legend=1 00:32:57.706 --rc geninfo_all_blocks=1 00:32:57.706 --rc geninfo_unexecuted_blocks=1 00:32:57.706 00:32:57.706 ' 00:32:57.706 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:57.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.706 --rc genhtml_branch_coverage=1 00:32:57.706 --rc genhtml_function_coverage=1 00:32:57.706 --rc genhtml_legend=1 00:32:57.706 --rc geninfo_all_blocks=1 00:32:57.706 --rc geninfo_unexecuted_blocks=1 00:32:57.706 00:32:57.706 ' 00:32:57.706 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:57.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.706 --rc genhtml_branch_coverage=1 00:32:57.706 --rc genhtml_function_coverage=1 00:32:57.706 --rc genhtml_legend=1 00:32:57.706 --rc geninfo_all_blocks=1 00:32:57.706 --rc geninfo_unexecuted_blocks=1 00:32:57.706 00:32:57.706 ' 00:32:57.706 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:32:57.706 16:20:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:57.706 16:20:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:32:57.706 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.964 16:20:26 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:57.965 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3017661 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3017661 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 3017661 ']' 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:57.965 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:57.965 [2024-12-15 16:20:26.360488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:57.965 [2024-12-15 16:20:26.360542] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017661 ] 00:32:57.965 [2024-12-15 16:20:26.429911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:57.965 [2024-12-15 16:20:26.469088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.965 [2024-12-15 16:20:26.469091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
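The spdkcli target was brought up just above with build/bin/nvmf_tgt -m 0x3 -p 0, and waitforlisten's readiness check amounts to polling the RPC socket rather than sleeping a fixed time. A minimal sketch of that loop shape (rpc_get_methods is an ordinary RPC; this illustrates the pattern, not the harness's exact implementation):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
    # Poll until the app answers on /var/tmp/spdk.sock instead of sleeping.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 1 \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done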
00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:32:58.223 16:20:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:06.327 16:20:33 
spdkcli_nvmf_rdma -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:06.327 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:06.327 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:06.327 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:06.327 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # is_hw=yes 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 
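Discovery above reduced pci_devs to the two mlx5 functions (0x15b3 - 0x1015) and resolved each to its netdev through sysfs; stripped of the harness plumbing, the mapping is just the following (PCI addresses taken from the "Found 0000:d9:00.x" entries above):

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # Each PCI function lists its network interface name under net/.
        ls "/sys/bus/pci/devices/$pci/net/"   # -> mlx_0_0, mlx_0_1
    done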
00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # rdma_device_init 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@526 -- # allocate_nic_ips 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:06.327 16:20:33 
spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:06.327 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:06.327 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:06.327 altname enp217s0f0np0 00:33:06.327 altname ens818f0np0 00:33:06.327 inet 192.168.100.8/24 scope global mlx_0_0 00:33:06.327 valid_lft forever preferred_lft forever 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:06.327 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:06.327 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:06.327 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:06.327 altname enp217s0f1np1 00:33:06.327 altname ens818f1np1 00:33:06.327 inet 192.168.100.9/24 scope global mlx_0_1 00:33:06.327 valid_lft forever preferred_lft forever 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # return 0 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:33:06.328 192.168.100.9' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:33:06.328 192.168.100.9' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # head -n 1 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:33:06.328 192.168.100.9' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # tail -n +2 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # head -n 1 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:06.328 16:20:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:06.328 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:06.328 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:06.328 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:06.328 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:06.328 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:06.328 '\''nvmf/transport create rdma 
max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:06.328 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:06.328 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:06.328 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:06.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:06.328 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:06.328 ' 00:33:07.696 [2024-12-15 16:20:36.080404] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20428f0/0x2050f30) succeed. 00:33:07.696 [2024-12-15 16:20:36.090171] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2043fd0/0x20d0fc0) succeed. 
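A few records above, each RDMA interface is reduced to its IPv4 address with an ip/awk/cut pipeline, and the first and second target IPs are then split out of the joined list with head and tail. A self-contained sketch of that pipeline, using the interface names from this run:

    get_ip_address() {
        # ip -o prints one record per address; field 4 is addr/prefix, cut drops the prefix
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

On this host the two assignments come out as 192.168.100.8 and 192.168.100.9, matching the trace.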
00:33:09.064 [2024-12-15 16:20:37.364338] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:33:11.585 [2024-12-15 16:20:39.611257] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:33:13.478 [2024-12-15 16:20:41.541530] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:33:14.847 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:14.847 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:14.847 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:14.847 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:14.847 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:14.847 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:14.847 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:14.847 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:14.847 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:14.847 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:14.847 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:14.847 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:14.847 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:14.847 16:20:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:14.847 16:20:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:14.847 16:20:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 16:20:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:14.847 16:20:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:14.847 16:20:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 16:20:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:33:14.847 16:20:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:15.103 16:20:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:15.103 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:15.103 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:15.103 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:15.103 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:33:15.103 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:33:15.103 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:15.103 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:15.103 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:15.103 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:15.103 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:15.103 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:15.103 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:15.103 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:15.103 ' 00:33:20.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:20.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:20.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:20.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:20.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:33:20.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:33:20.353 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:20.353 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:20.353 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:20.353 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:20.353 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:20.353 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:20.353 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:20.353 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3017661 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 3017661 ']' 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 3017661 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3017661 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3017661' 00:33:20.353 killing process with pid 3017661 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 3017661 00:33:20.353 16:20:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 3017661 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
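The teardown traced here follows two patterns worth isolating: killprocess probes the target with kill -0 before signalling and reaps it with wait, and nvmftestfini (in the records that follow) relaxes errexit while it retries the RDMA module unload, since nvme-rdma can stay busy while queues drain. A hedged sketch of both, with the pid taken from this run; the sleep between unload attempts is an assumption the trace does not show:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # wait only reaps our own children
    }
    killprocess 3017661

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                                    # assumed pacing; not visible in the trace
    done
    set -e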
00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:20.611 rmmod nvme_rdma 00:33:20.611 rmmod nvme_fabrics 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:33:20.611 00:33:20.611 real 0m23.094s 00:33:20.611 user 0m49.029s 00:33:20.611 sys 0m6.184s 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:20.611 16:20:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:20.611 ************************************ 00:33:20.611 END TEST spdkcli_nvmf_rdma 00:33:20.611 ************************************ 00:33:20.868 16:20:49 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:20.868 16:20:49 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:20.868 16:20:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:20.868 16:20:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:20.868 16:20:49 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:20.868 16:20:49 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:20.868 16:20:49 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:20.868 16:20:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:20.868 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:33:20.868 16:20:49 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:20.868 16:20:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:20.868 16:20:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:20.868 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:33:26.130 INFO: APP EXITING 00:33:26.130 INFO: killing all VMs 00:33:26.130 INFO: killing vhost app 00:33:26.130 INFO: EXIT DONE 00:33:29.415 Waiting for block devices as requested 00:33:29.415 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:29.415 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:29.415 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:29.415 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:29.415 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:29.415 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:29.415 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:29.674 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:33:29.674 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:29.674 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:29.933 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:29.933 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:29.933 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:30.191 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:30.191 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:30.191 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:30.448 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:33.725 Cleaning 00:33:33.725 Removing: /var/run/dpdk/spdk0/config 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:33.725 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:33.725 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:33.725 Removing: /var/run/dpdk/spdk1/config 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:33.725 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:33.725 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:33.725 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:33.725 Removing: /var/run/dpdk/spdk2/config 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:33.725 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:33.726 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:33.726 Removing: /var/run/dpdk/spdk3/config 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:33.726 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:33.726 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:33.726 Removing: /var/run/dpdk/spdk4/config 00:33:33.726 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:33.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:33.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:33.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:33.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:33.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:33.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:33.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:33.726 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:33.726 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:33.726 Removing: /dev/shm/bdevperf_trace.pid2666367 00:33:33.726 Removing: /dev/shm/bdevperf_trace.pid2912624 00:33:33.726 Removing: /dev/shm/bdev_svc_trace.1 00:33:33.726 Removing: /dev/shm/nvmf_trace.0 00:33:33.726 Removing: /dev/shm/spdk_tgt_trace.pid2622210 00:33:33.726 Removing: /var/run/dpdk/spdk0 00:33:33.726 Removing: /var/run/dpdk/spdk1 00:33:33.726 Removing: /var/run/dpdk/spdk2 00:33:33.726 Removing: /var/run/dpdk/spdk3 00:33:33.726 Removing: /var/run/dpdk/spdk4 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2619656 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2620920 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2622210 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2622856 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2623731 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2623970 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2625034 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2625086 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2625449 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2630493 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2632049 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2632368 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2632689 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2632953 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2633114 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2633401 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2633684 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2634010 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2634858 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2637905 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2638104 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2638368 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2638449 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2638956 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2639146 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2639759 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2639772 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2640064 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2640078 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2640370 00:33:33.726 Removing: /var/run/dpdk/spdk_pid2640380 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2641007 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2641248 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2641556 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2645830 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2650102 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2660511 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2661347 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2666367 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2666643 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2670667 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2676599 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2679391 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2689533 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2714776 00:33:34.001 Removing: 
/var/run/dpdk/spdk_pid2718615 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2813337 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2818653 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2824681 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2832969 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2863925 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2869198 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2910777 00:33:34.001 Removing: /var/run/dpdk/spdk_pid2911546 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2912624 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2913586 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2918346 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2925264 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2926300 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2927110 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2928125 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2928437 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2932775 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2932844 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2937241 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2937820 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2938464 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2939090 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2939246 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2941638 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2944098 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2945952 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2947853 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2949707 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2951560 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2957685 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2958306 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2960639 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2961678 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2968600 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2971269 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2976838 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2987554 00:33:34.002 Removing: /var/run/dpdk/spdk_pid2987558 00:33:34.002 Removing: /var/run/dpdk/spdk_pid3007336 00:33:34.002 Removing: /var/run/dpdk/spdk_pid3007601 00:33:34.002 Removing: /var/run/dpdk/spdk_pid3013449 00:33:34.002 Removing: /var/run/dpdk/spdk_pid3013766 00:33:34.002 Removing: /var/run/dpdk/spdk_pid3015669 00:33:34.002 Removing: /var/run/dpdk/spdk_pid3017661 00:33:34.002 Clean 00:33:34.277 16:21:02 -- common/autotest_common.sh@1451 -- # return 0 00:33:34.277 16:21:02 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:34.277 16:21:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:34.277 16:21:02 -- common/autotest_common.sh@10 -- # set +x 00:33:34.277 16:21:02 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:34.277 16:21:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:34.277 16:21:02 -- common/autotest_common.sh@10 -- # set +x 00:33:34.277 16:21:02 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:33:34.277 16:21:02 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:33:34.277 16:21:02 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:33:34.277 16:21:02 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:34.277 16:21:02 -- spdk/autotest.sh@394 -- # hostname 00:33:34.277 16:21:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:33:34.542 geninfo: WARNING: invalid characters removed from testname! 00:33:56.460 16:21:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:56.718 16:21:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:58.618 16:21:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:59.993 16:21:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:01.893 16:21:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:03.268 16:21:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:05.170 16:21:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:05.170 16:21:33 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:34:05.170 16:21:33 -- common/autotest_common.sh@1681 -- $ lcov --version 00:34:05.170 16:21:33 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:34:05.170 16:21:33 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:34:05.170 16:21:33 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:34:05.170 16:21:33 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:34:05.170 16:21:33 
-- scripts/common.sh@334 -- $ local ver2 ver2_l 00:34:05.170 16:21:33 -- scripts/common.sh@336 -- $ IFS=.-: 00:34:05.170 16:21:33 -- scripts/common.sh@336 -- $ read -ra ver1 00:34:05.170 16:21:33 -- scripts/common.sh@337 -- $ IFS=.-: 00:34:05.170 16:21:33 -- scripts/common.sh@337 -- $ read -ra ver2 00:34:05.170 16:21:33 -- scripts/common.sh@338 -- $ local 'op=<' 00:34:05.170 16:21:33 -- scripts/common.sh@340 -- $ ver1_l=2 00:34:05.170 16:21:33 -- scripts/common.sh@341 -- $ ver2_l=1 00:34:05.170 16:21:33 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:34:05.170 16:21:33 -- scripts/common.sh@344 -- $ case "$op" in 00:34:05.170 16:21:33 -- scripts/common.sh@345 -- $ : 1 00:34:05.170 16:21:33 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:34:05.170 16:21:33 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:05.170 16:21:33 -- scripts/common.sh@365 -- $ decimal 1 00:34:05.170 16:21:33 -- scripts/common.sh@353 -- $ local d=1 00:34:05.170 16:21:33 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:34:05.170 16:21:33 -- scripts/common.sh@355 -- $ echo 1 00:34:05.170 16:21:33 -- scripts/common.sh@365 -- $ ver1[v]=1 00:34:05.170 16:21:33 -- scripts/common.sh@366 -- $ decimal 2 00:34:05.170 16:21:33 -- scripts/common.sh@353 -- $ local d=2 00:34:05.170 16:21:33 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:34:05.170 16:21:33 -- scripts/common.sh@355 -- $ echo 2 00:34:05.170 16:21:33 -- scripts/common.sh@366 -- $ ver2[v]=2 00:34:05.170 16:21:33 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:34:05.170 16:21:33 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:34:05.170 16:21:33 -- scripts/common.sh@368 -- $ return 0 00:34:05.170 16:21:33 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.170 16:21:33 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:34:05.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.170 --rc genhtml_branch_coverage=1 00:34:05.170 --rc genhtml_function_coverage=1 00:34:05.170 --rc genhtml_legend=1 00:34:05.170 --rc geninfo_all_blocks=1 00:34:05.170 --rc geninfo_unexecuted_blocks=1 00:34:05.170 00:34:05.170 ' 00:34:05.170 16:21:33 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:34:05.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.170 --rc genhtml_branch_coverage=1 00:34:05.170 --rc genhtml_function_coverage=1 00:34:05.170 --rc genhtml_legend=1 00:34:05.170 --rc geninfo_all_blocks=1 00:34:05.170 --rc geninfo_unexecuted_blocks=1 00:34:05.170 00:34:05.170 ' 00:34:05.170 16:21:33 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:34:05.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.170 --rc genhtml_branch_coverage=1 00:34:05.170 --rc genhtml_function_coverage=1 00:34:05.170 --rc genhtml_legend=1 00:34:05.170 --rc geninfo_all_blocks=1 00:34:05.170 --rc geninfo_unexecuted_blocks=1 00:34:05.170 00:34:05.170 ' 00:34:05.170 16:21:33 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:34:05.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.170 --rc genhtml_branch_coverage=1 00:34:05.170 --rc genhtml_function_coverage=1 00:34:05.170 --rc genhtml_legend=1 00:34:05.170 --rc geninfo_all_blocks=1 00:34:05.170 --rc geninfo_unexecuted_blocks=1 00:34:05.170 00:34:05.170 ' 00:34:05.170 16:21:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:05.170 16:21:33 -- scripts/common.sh@15 -- $ shopt 
-s extglob 00:34:05.170 16:21:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:05.170 16:21:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.170 16:21:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.170 16:21:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.170 16:21:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.170 16:21:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.170 16:21:33 -- paths/export.sh@5 -- $ export PATH 00:34:05.170 16:21:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.170 16:21:33 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:34:05.170 16:21:33 -- common/autobuild_common.sh@479 -- $ date +%s 00:34:05.170 16:21:33 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734276093.XXXXXX 00:34:05.170 16:21:33 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734276093.0pJPWO 00:34:05.170 16:21:33 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:34:05.170 16:21:33 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:34:05.170 16:21:33 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:34:05.170 16:21:33 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:34:05.170 16:21:33 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:05.170 16:21:33 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:05.170 16:21:33 -- common/autobuild_common.sh@495 -- $ get_config_params 00:34:05.170 16:21:33 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:34:05.170 16:21:33 -- common/autotest_common.sh@10 -- $ set +x 00:34:05.170 16:21:33 -- 
common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:34:05.170 16:21:33 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:34:05.170 16:21:33 -- pm/common@17 -- $ local monitor 00:34:05.170 16:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:05.170 16:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:05.170 16:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:05.170 16:21:33 -- pm/common@21 -- $ date +%s 00:34:05.170 16:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:05.170 16:21:33 -- pm/common@21 -- $ date +%s 00:34:05.170 16:21:33 -- pm/common@21 -- $ date +%s 00:34:05.170 16:21:33 -- pm/common@25 -- $ sleep 1 00:34:05.170 16:21:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734276093 00:34:05.170 16:21:33 -- pm/common@21 -- $ date +%s 00:34:05.170 16:21:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734276093 00:34:05.170 16:21:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734276093 00:34:05.170 16:21:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734276093 00:34:05.170 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734276093_collect-cpu-temp.pm.log 00:34:05.171 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734276093_collect-cpu-load.pm.log 00:34:05.171 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734276093_collect-vmstat.pm.log 00:34:05.429 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734276093_collect-bmc-pm.bmc.pm.log 00:34:06.363 16:21:34 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:34:06.363 16:21:34 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:34:06.363 16:21:34 -- spdk/autopackage.sh@14 -- $ timing_finish 00:34:06.363 16:21:34 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:06.363 16:21:34 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:06.363 16:21:34 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:34:06.363 16:21:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:06.363 16:21:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:06.363 16:21:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:06.363 16:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:06.363 16:21:34 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:06.363 16:21:34 -- pm/common@44 -- $ pid=3037176 00:34:06.363 16:21:34 -- pm/common@50 -- $ kill -TERM 3037176 00:34:06.363 16:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:06.363 16:21:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:06.363 16:21:34 -- pm/common@44 -- $ pid=3037178 00:34:06.363 16:21:34 -- pm/common@50 -- $ kill -TERM 3037178 00:34:06.363 16:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:06.363 16:21:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:06.363 16:21:34 -- pm/common@44 -- $ pid=3037180 00:34:06.363 16:21:34 -- pm/common@50 -- $ kill -TERM 3037180 00:34:06.363 16:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:06.363 16:21:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:06.363 16:21:34 -- pm/common@44 -- $ pid=3037207 00:34:06.363 16:21:34 -- pm/common@50 -- $ sudo -E kill -TERM 3037207 00:34:06.363 + [[ -n 2524560 ]] 00:34:06.363 + sudo kill 2524560 00:34:06.373 [Pipeline] } 00:34:06.387 [Pipeline] // stage 00:34:06.392 [Pipeline] } 00:34:06.405 [Pipeline] // timeout 00:34:06.409 [Pipeline] } 00:34:06.423 [Pipeline] // catchError 00:34:06.428 [Pipeline] } 00:34:06.442 [Pipeline] // wrap 00:34:06.447 [Pipeline] } 00:34:06.456 [Pipeline] // catchError 00:34:06.462 [Pipeline] stage 00:34:06.464 [Pipeline] { (Epilogue) 00:34:06.473 [Pipeline] catchError 00:34:06.474 [Pipeline] { 00:34:06.483 [Pipeline] echo 00:34:06.484 Cleanup processes 00:34:06.487 [Pipeline] sh 00:34:06.775 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:06.775 3037341 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:34:06.775 3037740 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:06.787 [Pipeline] sh 00:34:07.066 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:07.066 ++ grep -v 'sudo pgrep' 00:34:07.066 ++ awk '{print $1}' 00:34:07.066 + sudo kill -9 3037341 00:34:07.078 [Pipeline] sh 00:34:07.364 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:07.364 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:34:12.641 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:34:16.846 [Pipeline] sh 00:34:17.129 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:17.129 Artifacts sizes are good 00:34:17.142 [Pipeline] archiveArtifacts 00:34:17.148 Archiving artifacts 00:34:17.318 [Pipeline] sh 00:34:17.643 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:34:17.657 [Pipeline] cleanWs 00:34:17.666 [WS-CLEANUP] Deleting project workspace... 00:34:17.666 [WS-CLEANUP] Deferred wipeout is used... 00:34:17.673 [WS-CLEANUP] done 00:34:17.675 [Pipeline] } 00:34:17.691 [Pipeline] // catchError 00:34:17.704 [Pipeline] sh 00:34:17.988 + logger -p user.info -t JENKINS-CI 00:34:17.997 [Pipeline] } 00:34:18.009 [Pipeline] // stage 00:34:18.014 [Pipeline] } 00:34:18.029 [Pipeline] // node 00:34:18.034 [Pipeline] End of Pipeline 00:34:18.091 Finished: SUCCESS
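For reference, the lt 1.15 2 check traced in the coverage epilogue splits both version strings on the characters . - : and compares them field by field. A condensed sketch of that comparison, assuming numeric fields only (the full cmp_versions helper also validates each field):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x; enable branch/function coverage flags"

Here 1.15 < 2, so the branch- and function-coverage options land in LCOV_OPTS, matching the exports traced above.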